{"id":46733,"date":"2023-01-27T00:00:00","date_gmt":"2023-01-27T08:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/stream-data-with-griddb-and-kafka\/"},"modified":"2025-11-13T12:56:19","modified_gmt":"2025-11-13T20:56:19","slug":"stream-data-with-griddb-and-kafka","status":"publish","type":"post","link":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/","title":{"rendered":"Stream Data with GridDB and Kafka"},"content":{"rendered":"<p>Apache Kafka is a tool which allows for &#8220;real-time processing of record streams&#8221;. What this means is that you can send real-time data from your sensors or various other pieces of tools directly into something else, and in this case, directly into GridDB. You can also do the opposite: you can stream data from your GridDB containers directly into some other tool like logging analytics or various other tools.<\/p>\n<p>We have written before about using Apache Kafka to load real-time data directly into your GridDB server using a JDBC connection. You can read our previous blogs with the following links: <a href=\"https:\/\/griddb.net\/en\/blog\/using-kafka-with-griddb\/\">Using Kafka With GridDB<\/a> &amp; <a href=\"https:\/\/griddb.net\/en\/blog\/using-griddb-as-a-source-for-kafka-with-jdbc\/\">Using GridDB as a source for Kafka with JDBC<\/a>.<\/p>\n<p>For this article, we will again be using Kafka in conjunction with GridDB with the newly released GridDB Kafka Connector. As stated above, our previous articles focused on marrying GridDB and Kafka via JDBC, but with the help of the newly released connector, JDBC is no longer a piece of the equation; we can now interface with GridDB directly from Kafka through the sink and source connectors. The GridDB Kafka sink connector pushes data from Apache Kafka topics and persists the data to GridDB database tables. 
And the source connector works in the opposite fashion, pulling data from GridDB and putting it into Kafka topics.<\/p>\n<p>To showcase this, we will demonstrate both types of connector: pushing data from our Kafka &#8220;topics&#8221; directly into our running GridDB server (Sink) and then vice-versa (Source).<\/p>\n<p>And because it does get a bit confusing trying to follow along with this written content, I will say up front that you will eventually need to have 5 different terminals open:<\/p>\n<ul>\n<li>Terminal 1: for GridDB gs_sh (to verify GridDB operations)<\/li>\n<li>Terminals 2 &amp; 3: for running Kafka ZooKeeper and the Kafka server<\/li>\n<li>Terminal 4: for running the GridDB Kafka sink and source connectors<\/li>\n<li>Terminal 5: for running the script and checking data<\/li>\n<\/ul>\n<h2>Installing and Setting up Environment\/Terminals<\/h2>\n<p>Continuing with this article, we will install all prerequisites and get the proper servers\/scripts running, create our Kafka <a href=\"https:\/\/kafka.apache.org\/intro#intro_concepts_and_terms\">&#8220;topics&#8221;<\/a>, push and save the data to our GridDB server (Sink Connector), do a live showcase, and then finally pull data from GridDB directly into our Kafka topics (Source Connector).<\/p>\n<h3>Prerequisites<\/h3>\n<p>To follow along, please have the following ready:<\/p>\n<ul>\n<li>Java<\/li>\n<li>Maven (only if building the GridDB Kafka Connector from source)<\/li>\n<li><a href=\"https:\/\/kafka.apache.org\/downloads\">Kafka<\/a><\/li>\n<li>\n<p><a href=\"https:\/\/docs.griddb.net\/latest\/gettingstarted\/using-apt\/\">GridDB<\/a><\/p>\n<p>As another resource, you can also take a look at the source code here: https:\/\/github.com\/griddbnet\/Blogs\/tree\/kafka. From within this repo, you will have access to the basics (i.e. 
the config files, the data, and the bash script), but you will still need to download GridDB, Kafka, and the GridDB Kafka connector.<\/p>\n<\/li>\n<\/ul>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ git clone https:\/\/github.com\/griddbnet\/Blogs.git --branch kafka<\/code><\/pre>\n<\/div>\n<p>Another thing I would recommend is adding your Kafka <code>bin<\/code> directory to your PATH. You can do this every time you open up a terminal, or you can add it to your profile (<code>$ vim ~\/.bashrc<\/code>) by appending the following line:<\/p>\n<pre><code class=\"bash\">export PATH=$PATH:\/home\/israel\/kafka_project\/kafka_2.13-3.2.1\/bin\n<\/code><\/pre>\n<p>Now the Kafka scripts should be available whenever you open up a new terminal. If the change doesn&#8217;t take effect, you can force it: <code>$ source ~\/.bashrc<\/code>.<\/p>\n<h3>GridDB Kafka Connector (Installation)<\/h3>\n<p>Head over to the Connector&#8217;s repo and clone it:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ git clone https:\/\/github.com\/griddb\/griddb-kafka-connect.git\n$ cd griddb-kafka-connect\/<\/code><\/pre>\n<\/div>\n<p>Once you have downloaded the Kafka connector, we can build the needed <code>.jar<\/code> file or take it directly from the repository shared earlier in this article and move the resulting file into the proper location. 
To build:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ mvn clean install<\/code><\/pre>\n<\/div>\n<p>If you don&#8217;t want to build your own file, you can also grab a copy of the file that has been included with the repository on GitHub.<\/p>\n<p>From there, simply copy over the <code>.jar<\/code> file (<code>griddb-kafka-connector-0.5.jar<\/code>) to your Kafka directory&#8217;s <code>.\/libs<\/code> directory.<\/p>\n<h3>Installing Kafka<\/h3>\n<p>To install Kafka:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ wget https:\/\/archive.apache.org\/dist\/kafka\/3.2.1\/kafka_2.13-3.2.1.tgz\n$ tar xzvf kafka_2.13-3.2.1.tgz\n$ cd kafka_2.13-3.2.1\n$ export PATH=$PATH:$PWD\/bin<\/code><\/pre>\n<\/div>\n<p>A quick note: adding Kafka to your PATH environment variable only applies to that specific terminal session; if you open up a second (or third, etc.) terminal, you will need to re-export the environment variable or manually enter the full path to the scripts. Of course, you can also add it to your user&#8217;s <code>.bashrc<\/code> or <code>.bash_profile<\/code> file.<\/p>\n<h3>Setting up GridDB Sink Config File<\/h3>\n<p>To set up your GridDB configuration, you can copy griddb-sink.properties from the Blogs folder (from the GitHub repository shared earlier) to kafka\/config. If you would like to edit this portion manually, read on.<\/p>\n<p>Please edit the config file to enter your running GridDB server&#8217;s credentials as well as the topics we aim to ingest. The file in question is: <code>griddb-sink.properties<\/code><\/p>\n<p>Now we can enter our GridDB server information. Because we are running in FIXED_LIST mode, we will edit the notification member as well and remove the host and port. 
Lastly, let&#8217;s add in the topics we mean to ingest into our GridDB server:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">#host=239.0.0.1\n#port=31999\ncluster.name=myCluster\nuser=admin\npassword=admin\nnotification.member=127.0.0.1:10001\n#notification.provider.url=\n\n#topics.regex=csh(.*)\n#topics.regex=topic.(.*)\ntopics=device7,device8,device9,device10\n\ntransforms=TimestampConverter\ntransforms.TimestampConverter.type=org.apache.kafka.connect.transforms.TimestampConverter$Value\ntransforms.TimestampConverter.format=yyyy-MM-dd hh:mm:ss.SSS\ntransforms.TimestampConverter.field=ts\ntransforms.TimestampConverter.target.type=Timestamp<\/code><\/pre>\n<\/div>\n<p>Here we are explicitly telling our GridDB sink connector to look for these exact topics. We could of course use regex if we wanted to be more general, but for demo purposes, this is okay.<\/p>\n<p>We also need to change our timestamp format to include milliseconds at the end, and we need to set our timestamp&#8217;s column name, in this case <code>ts<\/code>.<\/p>\n<p>Once this file is edited, please make a copy into your <code>kafka_2.13-3.2.1\/config<\/code> directory.<\/p>\n<h3>Setting up GridDB Source Config File<\/h3>\n<p>If you want to do the reverse of what was described (i.e. pushing saved GridDB containers out onto Kafka), you will instead use the GridDB source connector. First, you will need to edit the config file <code>griddb-source.properties<\/code> in GRIDDB_KAFKA_CONNECTOR_FOLDER\/config. You will need to change the GridDB connection details as well as the containers\/topics. 
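<\/p>\n<p>As a sketch of what that connection section might look like (assuming the same local FIXED_LIST setup we used for the sink), the top of <code>griddb-source.properties<\/code> would contain:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">cluster.name=myCluster\nuser=admin\npassword=admin\nnotification.member=127.0.0.1:10001<\/code><\/pre>\n<\/div>\n<p>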
Of course, similar to the previous config file, the version we used for this article is included with the GitHub repository: <a href=\"https:\/\/github.com\/griddbnet\/Blogs\/blob\/kafka\/griddb-source.properties\">here<\/a>.<\/p>\n<p>For the containers section, let&#8217;s point it directly at the large datasets we ingested from <a href=\"https:\/\/www.kaggle.com\/code\/rjconstable\/environmental-sensor-telemetry-dataset\">Kaggle<\/a>.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">containers=device1,device2,device3,device4<\/code><\/pre>\n<\/div>\n<p>One of the major differences between these two files (sink vs. source) is that the source requires the following mandatory parameter: <code>timestamp.column.name<\/code>. For this, we set it to our timestamp row key, which in the device7 container is <code>ts<\/code>.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">timestamp.column.name=ts\nmode=timestamp<\/code><\/pre>\n<\/div>\n<p>Note: you could also use <code>mode=batch<\/code>, which ignores the <code>timestamp.column.name<\/code> parameter and will loop over and re-send your dataset many times, whereas the former mode (timestamp) grabs the data just one time (&#8217;til the queue is exhausted).<\/p>\n<p>And with that, we should be ready to start our processes\/servers.<\/p>\n<p>Again, please make a copy into your <code>kafka_2.13-3.2.1\/config<\/code> directory.<\/p>\n<h2>Starting Up Necessary Servers\/Systems<\/h2>\n<p>Since everything is in place now, let&#8217;s do a quick recap of our current directory structure. If you are following along, there should be three directories ready for use:<\/p>\n<p>\/home\/you\/kafka_project\/<br \/>\n\u251c\u2500 kafka_2.13-3.2.1\/<br \/>\n\u251c\u2500 griddb-kafka-connect\/<br \/>\n\u251c\u2500 Blogs\/<\/p>\n<p>The first directory (<code>kafka_2.13-3.2.1<\/code>) is the main Kafka directory. 
The second directory (<code>griddb-kafka-connect\/<\/code>) is the GridDB Kafka connector directory; it contains our GridDB-specific config files (either manually edited or taken directly from GitHub; these files should be copied over to your Kafka directory). And the third directory (<code>Blogs<\/code>) is the one built for this blog; it contains the Kafka settings which should be copied over to the Kafka directory where applicable.<\/p>\n<p>Now let&#8217;s finally get this running.<\/p>\n<h3>Start GridDB Server (Terminal 1)<\/h3>\n<p>First and foremost, we will need to run GridDB. To do so, you can follow the documentation: <a href=\"https:\/\/docs.griddb.net\/gettingstarted\/using-apt\/\">https:\/\/docs.griddb.net\/gettingstarted\/using-apt\/<\/a><\/p>\n<p>Once installed, start GridDB.<\/p>\n<pre><code>$ sudo systemctl start gridstore\n<\/code><\/pre>\n<p>You can keep this terminal open and run <code>gs_sh<\/code> to interact with GridDB through the terminal. To do so:<\/p>\n<pre><code class=\"bash\">$ sudo su gsadm\n$ gs_sh\ngs&gt; \n<\/code><\/pre>\n<h3>Kafka Prep<\/h3>\n<p>Next, to get this to work, you will also need to add the Kafka directory to your path as mentioned above in the first section where we installed Kafka. Here is that command again:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ export PATH=$PATH:\/path\/to\/kafka_2.13-3.2.1\/bin<\/code><\/pre>\n<\/div>\n<h3>Starting Kafka (Terminal 2 &amp; 3)<\/h3>\n<p>Let&#8217;s start the Kafka ZooKeeper and server with terminal 2. 
From the <code>kafka_2.13-3.2.1<\/code> directory:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ zookeeper-server-start.sh config\/zookeeper.properties<\/code><\/pre>\n<\/div>\n<p>And then in another terminal (3) (again, from the <code>kafka_2.13-3.2.1<\/code> directory):<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ kafka-server-start.sh config\/server.properties<\/code><\/pre>\n<\/div>\n<p>And now our Kafka is mostly ready to go. To continue, please open up a fourth terminal.<\/p>\n<h3>Run GridDB Kafka Connectors (Terminal 4)<\/h3>\n<p>Now we can finally start up the GridDB sink and source connectors in the same terminal with one command.<\/p>\n<p>To do so, from the Kafka directory, run:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ connect-standalone.sh config\/connect-standalone.properties PATH_TO_KAFKA\/config\/griddb-sink.properties PATH_TO_KAFKA\/config\/griddb-source.properties<\/code><\/pre>\n<\/div>\n<h2>Using the GridDB Sink Connector<\/h2>\n<p>As explained before, the SINK connector pulls data from Kafka topics directly INTO GridDB. Keep that in mind to understand how this section works, and then remember it again for the SOURCE connector section. For this portion, we will go through a batch ingestion and then showcase a live ingestion using the sink.<\/p>\n<h3>Batch Ingestion (Sink Connector)<\/h3>\n<p>Now with the sink connector running, let&#8217;s take a look at batch usage. First, let&#8217;s get some data going.<\/p>\n<h4>Setting Up Our Simulated Sensor Data<\/h4>\n<p>Because Kafka organizes its data payloads into topics, we have prepared some sample topics to ensure you can understand what is going on. 
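<\/p>\n<p>Before creating them, it may help to see the shape of a single message. Each record pairs a <code>payload<\/code> with a <code>schema<\/code> describing its fields. The following Python sketch (purely illustrative, reduced to three fields; the real script below uses all nine) builds one such envelope:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import json\n\n# Illustrative sketch: build the schema-plus-payload envelope the\n# GridDB sink connector expects, reduced here to three fields.\ndef make_envelope(ts, sensor, temp):\n    schema = {\"type\": \"struct\", \"name\": \"iot\", \"optional\": False,\n              \"fields\": [{\"field\": \"ts\", \"optional\": False, \"type\": \"string\"},\n                         {\"field\": \"sensor\", \"optional\": False, \"type\": \"string\"},\n                         {\"field\": \"temp\", \"optional\": False, \"type\": \"double\"}]}\n    payload = {\"ts\": ts, \"sensor\": sensor, \"temp\": temp}\n    return json.dumps({\"payload\": payload, \"schema\": schema})\n\nprint(make_envelope(\"2020-07-12 00:01:34.735\", \"device7\", 19.7))<\/code><\/pre>\n<\/div>\n<p>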
So before we move on, let&#8217;s create some topics using the included script and .txt file.<\/p>\n<h4>Creating Kafka Topics (Using Script)<\/h4>\n<p>To use the included files for this part of the project, please copy the <code>script_sink.sh<\/code> and <code>simulate_sensor.txt<\/code> from the Blogs folder to the kafka root folder.<\/p>\n<p>In a real world example, our sensors would be individually setting up Kafka topics for our Kafka server to pick up on and send to GridDB, but because this is a demo, we will simply simulate the topic generation portion using a bash script and a <code>.txt<\/code> file.<\/p>\n<p>The following is the content of our <code>simulate_sensor.txt<\/code> file:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">2020-07-12 00:01:34.735 device7 0.0028400886071015706 76.0 false 0.005114383400977071 false 0.013274836704851536 19.700000762939453\n2020-07-12 00:02:02.785 device8 0.0029050147565559603 75.80000305175781 false 0.005198697479294309 false 0.013508733329556249 19.700000762939453\n2020-07-12 00:02:11.476 device9 0.0029381156266604295 75.80000305175781 false 0.005241481841731117 false 0.013627521132019194 19.700000762939453\n2020-07-12 00:02:15.289 device10 0.0028400886071015706 76.0 false 0.005114383400977071 false 0.013274836704851536 19.700000762939453\n2020-07-12 00:02:19.641 device7 0.0028400886071015706 76.0 false 0.005114383400977071 false 0.013274836704851536 19.799999237060547\n2020-07-12 00:02:28.818 device8 0.0029050147565559603 75.9000015258789 false 0.005198697479294309 false 0.013508733329556249 19.700000762939453\n2020-07-12 00:02:33.172 device9 0.0028400886071015706 76.0 false 0.005114383400977071 false 0.013274836704851536 19.799999237060547\n2020-07-12 00:02:39.145 device10 0.002872341154862943 76.0 false 0.005156332935627952 false 0.013391176782176004 19.799999237060547\n2020-07-12 00:02:47.256 device7 0.0029050147565559603 75.9000015258789 false 0.005198697479294309 false 0.013508733329556249 
19.700000762939453<\/code><\/pre>\n<\/div>\n<p>This is the data we aim to publish to GridDB.<\/p>\n<p>Next, let&#8217;s take a look at the content of the bash script called <code>script_sink.sh<\/code>:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">#!\/bin\/bash\n\nfunction echo_payload {\n    echo '{\"payload\": {\"ts\": \"'$1 $2'\",\"sensor\": \"'$3'\",\"co\": '$4',\"humidity\": '$5',\"light\": \"'$6'\",\"lpg\": '$7',\"motion\": \"'$8'\",\"smoke\": '$9',\"temp\": '${10}'},\"schema\": {\"fields\": [{\"field\": \"ts\",\"optional\": false,\"type\": \"string\"},{\"field\": \"sensor\",\"optional\": false,\"type\": \"string\"},{\"field\": \"co\",\"optional\": false,\"type\": \"double\"},{\"field\": \"humidity\",\"optional\": false,\"type\": \"double\"},{\"field\": \"light\",\"optional\": false,\"type\": \"boolean\"},{\"field\": \"lpg\",\"optional\": false,\"type\": \"double\"},{\"field\": \"motion\",\"optional\": false,\"type\": \"boolean\"},{\"field\": \"smoke\",\"optional\": false,\"type\": \"double\"},{\"field\": \"temp\",\"optional\": false,\"type\": \"double\"}],\"name\": \"iot\",\"optional\": false,\"type\": \"struct\"}}'\n}\n\nTOPICS=()\n\nfor file in `find $1 -name '*simulate_sensor.txt'` ; do\n    echo $file\n    head -10 $file | while read -r line ; do\n        SENSOR=`echo ${line} | awk '{ print $3 }'`\n        if [[ ! 
\" ${TOPICS[@]} \" =~ \" ${SENSOR} \" ]]; then\n            echo Creating topic ${SENSOR}\n            kafka-topics.sh --bootstrap-server 127.0.0.1:9092 --create --topic  ${SENSOR} 2&>1 \/dev\/null\n            TOPICS+=(${SENSOR})\n        fi\n        echo_payload ${line} | kafka-console-producer.sh --topic ${SENSOR} --bootstrap-server localhost:9092\n    done\ndone<\/code><\/pre>\n<\/div>\n<p>This script will read in our raw data text file and generate our topics with our data and send it to the proper kafka process.<\/p>\n<p>So essentially with one step we are creating the topics (device7, device8, device9, device10) and then also sending some payloads of data into them to play around with them.<\/p>\n<p>Now add the proper permissions to the script file and then we can run the script to feed into our kafka process. This will allow the data to be queued up and available for ingesting once the GridDB sink connector becomes available.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ chmod +x script_sink.sh\n$ .\/script_sink.sh<\/code><\/pre>\n<\/div>\n<pre><code>Creating topic device7\nCreating topic device8\nCreating topic device9\nCreating topic device10\n<\/code><\/pre>\n<p>When using Kafka in this manner, we are basically sending payloads to a Kafka topic and letting them build up, and then once Kafka is available (or in this case, we run the Kafka process), it will receive the topics and push it directly to GridDB.<\/p>\n<p>From the large amounts of output this command will generate, you should be able to see something resembling topics being placed into GridDB:<\/p>\n<pre><code>Put records to GridDB with number records 9 (com.github.griddb.kafka.connect.sink.GriddbSinkTask:54)\n<\/code><\/pre>\n<p>A small tip: if a topic ends up malformed and does not allow you to fix it, you can delete a topic to restart the process:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ kafka-topics.sh --bootstrap-server localhost:9092 
--delete --topic device7<\/code><\/pre>\n<\/div>\n<h4>Single Usage (Sink Connector)<\/h4>\n<p>Next, we will try sending off payloads after topic creation and after we get our GridDB sink running. The goal is to showcase live payloads being inserted into GridDB. So leave the sink running and let&#8217;s create a payload to send.<\/p>\n<p>First, if your <code>\/griddb-kafka-connect\/config\/griddb-sink.properties<\/code> matches mine (i.e. you are using <em>explicit<\/em> container names in the topics section), you will need to update the topics line. If you wish to add, say, device30 as a topic, you will need to include it in your sink config file (e.g. <code>topics=device7,device8,device9,device10,device30<\/code>) and then re-run the connector.<\/p>\n<p>Once your connector knows to look for the new topic, run the command to create it:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">kafka-console-producer.sh --topic device30 --bootstrap-server 127.0.0.1:9092<\/code><\/pre>\n<\/div>\n<p>And then the producer will sit there and listen for new payloads to send. 
Now we can send a payload and check with our running GridDB Sink to see if it receives the data:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">> { \"payload\": { \"ts\": \"2022-07-12 08:01:34.126\", \"sensor\": \"device8\", \"co\": 0.0028400886071015706, \"humidity\": 76.0, \"light\": \"false\", \"lpg\": 0.005114383400977071, \"motion\": \"false\", \"smoke\": 0.013274836704851536, \"temp\": 19.700000762939453 }, \"schema\": { \"fields\": [ { \"field\": \"ts\", \"optional\": false, \"type\": \"string\" }, { \"field\": \"sensor\", \"optional\": false, \"type\": \"string\" }, { \"field\": \"co\", \"optional\": false, \"type\": \"double\" }, { \"field\": \"humidity\", \"optional\": false, \"type\": \"double\" }, { \"field\": \"light\", \"optional\": false, \"type\": \"boolean\" }, { \"field\": \"lpg\", \"optional\": false, \"type\": \"double\" }, { \"field\": \"motion\", \"optional\": false, \"type\": \"boolean\" }, { \"field\": \"smoke\", \"optional\": false, \"type\": \"double\" }, { \"field\": \"temp\", \"optional\": false, \"type\": \"double\" } ], \"name\": \"iot\", \"optional\": false, \"type\": \"struct\" } }<\/code><\/pre>\n<\/div>\n<p>If you send this, in the running GridDB Sink, it should receive the change to the topic and register it directly to GridDB:<\/p>\n<pre><code>[2022-11-18 17:40:07,168] INFO [griddb-kafka-sink|task-0] Put 1 record to buffer of container device30 (com.github.griddb.kafka.connect.sink.GriddbBufferedRecords:75)\n[2022-11-18 17:40:07,169] INFO [griddb-kafka-sink|task-0] Get Container info of container device30 (com.github.griddb.kafka.connect.dialect.GriddbDatabaseDialect:130)\n[2022-11-18 17:40:07,201] INFO [griddb-kafka-sink|task-0] Get Container info of container device30 (com.github.griddb.kafka.connect.dialect.GriddbDatabaseDialect:130)\n<\/code><\/pre>\n<h2>Running the GridDB Source Connector<\/h2>\n<p>The Source Connector will do the opposite of the Sink Connector &#8212; it will pull data from GridDB and 
deliver it to our Kafka topics. For this demo, we already have three rather large GridDB containers (device1, device2, device3) which contain data from <a href=\"https:\/\/www.kaggle.com\/datasets\/garystafford\/environmental-sensor-data-132k\">Kaggle<\/a>, imported from PostgreSQL; you can ingest this data by following along in our <a href=\"https:\/\/griddb.net\/en\/blog\/using-the-griddb-import-export-tools-to-migrate-from-postgresql-to-griddb\/\">previous blog<\/a>.<\/p>\n<p>If you remember, when we were editing our source connector config file, we explicitly stated we wanted to take from those specific containers. The idea is that once we run the source connector, Kafka will pull in all relevant data from our GridDB database into topics.<\/p>\n<h4>Batch Usage (Source Connector)<\/h4>\n<p>If successful, you will see something like this in the connector&#8217;s output:<\/p>\n<pre><code>[2022-09-23 18:51:44,262] INFO [griddb-kafka-source|task-0] Get Container info of container device1 (com.github.griddb.kafka.connect.dialect.GriddbDatabaseDialect:130)\n[2022-09-23 18:51:44,276] INFO [griddb-kafka-source|task-0] Get Container info of container device2 (com.github.griddb.kafka.connect.dialect.GriddbDatabaseDialect:130)\n[2022-09-23 18:51:44,277] INFO [griddb-kafka-source|task-0] Get Container info of container device3 (com.github.griddb.kafka.connect.dialect.GriddbDatabaseDialect:130)\n<\/code><\/pre>\n<p>We will verify in the READING section, but our Kafka topics should now contain these containers&#8217; contents. This is essentially the extent of batch processing using the source; Kafka simply pulls directly from our database.<\/p>\n<h3>Live Ingestion (Source Connector)<\/h3>\n<p>Now let&#8217;s try the live version of what we did above &#8212; leaving the process running and inserting data into GridDB to see if the topic updates.<\/p>\n<p>Let&#8217;s leave the terminal running which reads directly from the Kafka topics. 
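<\/p>\n<p>If you don&#8217;t already have a consumer terminal open, one can be started like so (assuming the default broker address and the <code>device4<\/code> container we create below):<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ kafka-console-consumer.sh --topic device4 --bootstrap-server localhost:9092<\/code><\/pre>\n<\/div>\n<p>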
And now we can try inputting data directly into the container being read by the console consumer and watch it update live. To insert data into your container, you can use a Python script or the shell.<\/p>\n<p>With Python:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import griddb_python as griddb\nfrom datetime import datetime\n\nfactory = griddb.StoreFactory.get_instance()\nDB_HOST = \"127.0.0.1:10001\"\nDB_CLUSTER = \"myCluster\"\nDB_USER = \"admin\"\nDB_PASS = \"admin\"\n\ntry:\n    # (1) Connect to GridDB\n    # Fixed list method\n    gridstore = factory.get_store(\n        notification_member=DB_HOST, cluster_name=DB_CLUSTER, username=DB_USER, password=DB_PASS)\n\n    # (2) Create a timeseries container - define the schema\n    conInfo = griddb.ContainerInfo(name=\"device4\",\n                                   column_info_list=[[\"ts\", griddb.Type.TIMESTAMP],\n                                                     [\"co\", griddb.Type.DOUBLE],\n                                                     [\"humidity\", griddb.Type.DOUBLE],\n                                                     [\"light\", griddb.Type.BOOL],\n                                                     [\"lpg\", griddb.Type.DOUBLE],\n                                                     [\"motion\", griddb.Type.BOOL],\n                                                     [\"smoke\", griddb.Type.DOUBLE],\n                                                     [\"temperature\", griddb.Type.DOUBLE]],\n                                   type=griddb.ContainerType.TIME_SERIES)\n    # Create the container\n    ts = gridstore.put_container(conInfo)\n    print(conInfo.name, \"container successfully created\")\n\n    now = datetime.utcnow()\n\n    # (3) Put one row; the running source connector should pick it up\n    device4 = gridstore.get_container(\"device4\")\n    if device4 is None:\n        print(\"ERROR Container not found.\")\n\n    device4.put([now, 0.004978, 51.0, True, 0.00764837, True, 0.0204566, 
55.2])\n\nexcept griddb.GSException as e:\n    for i in range(e.get_error_stack_size()):\n        print(\"[\", i, \"]\")\n        print(e.get_error_code(i))\n        print(e.get_location(i))\n        print(e.get_message(i))<\/code><\/pre>\n<\/div>\n<p>Once you insert the data, it will live load:<\/p>\n<pre><code>{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int64\",\"optional\":false,\"name\":\"org.apache.kafka.connect.data.Timestamp\",\"version\":1,\"field\":\"ts\"},{\"type\":\"double\",\"optional\":true,\"field\":\"co\"},{\"type\":\"double\",\"optional\":true,\"field\":\"humidity\"},{\"type\":\"boolean\",\"optional\":true,\"field\":\"light\"},{\"type\":\"double\",\"optional\":true,\"field\":\"lpg\"},{\"type\":\"boolean\",\"optional\":true,\"field\":\"motion\"},{\"type\":\"double\",\"optional\":true,\"field\":\"smoke\"},{\"type\":\"double\",\"optional\":true,\"field\":\"temperature\"}],\"optional\":false,\"name\":\"device4\"},\"payload\":{\"ts\":1664308679012,\"co\":0.004978,\"humidity\":51.0,\"light\":true,\"lpg\":0.00764837,\"motion\":true,\"smoke\":0.0204566,\"temperature\":55.2}}\n<\/code><\/pre>\n<p>Alternatively you could use the GridDB CLI:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ gs_sh\n  gs> putrow device4 2022-09-30T12:30:01.234Z 0.003551 22.0 False 0.00754352 False 0.0232432 
33.3<\/code><\/pre>\n<\/div>\n<pre><code>{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int64\",\"optional\":false,\"name\":\"org.apache.kafka.connect.data.Timestamp\",\"version\":1,\"field\":\"ts\"},{\"type\":\"double\",\"optional\":true,\"field\":\"co\"},{\"type\":\"double\",\"optional\":true,\"field\":\"humidity\"},{\"type\":\"boolean\",\"optional\":true,\"field\":\"light\"},{\"type\":\"double\",\"optional\":true,\"field\":\"lpg\"},{\"type\":\"boolean\",\"optional\":true,\"field\":\"motion\"},{\"type\":\"double\",\"optional\":true,\"field\":\"smoke\"},{\"type\":\"double\",\"optional\":true,\"field\":\"temperature\"}],\"optional\":false,\"name\":\"device4\"},\"payload\":{\"ts\":1664308679229,\"co\":0.003551,\"humidity\":22.0,\"light\":false,\"lpg\":0.00754352,\"motion\":false,\"smoke\":0.0232432,\"temperature\":34.3}}\n<\/code><\/pre>\n<h2>Reading Data<\/h2>\n<p>Next, let&#8217;s take a look at reading the data that is being moved.<\/p>\n<h3>Batch Reading from GridDB<\/h3>\n<p>Let&#8217;s try reading the data pulled from GridDB into our Kafka topics (via the Source connector)<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ kafka-topics.sh --list --bootstrap-server localhost:9092<\/code><\/pre>\n<\/div>\n<pre><code>device1\ndevice2\ndevice3\ndevice4\n<\/code><\/pre>\n<p>And now let&#8217;s actually take a look at the data:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ kafka-console-consumer.sh --topic device4 --from-beginning --bootstrap-server localhost:9092<\/code><\/pre>\n<\/div>\n<p>And then this was the 
output:<\/p>\n<pre><code>{\"schema\":{\"type\":\"struct\",\"fields\":[{\"type\":\"int64\",\"optional\":false,\"name\":\"org.apache.kafka.connect.data.Timestamp\",\"version\":1,\"field\":\"ts\"},{\"type\":\"double\",\"optional\":true,\"field\":\"co\"},{\"type\":\"double\",\"optional\":true,\"field\":\"humidity\"},{\"type\":\"boolean\",\"optional\":true,\"field\":\"light\"},{\"type\":\"double\",\"optional\":true,\"field\":\"lpg\"},{\"type\":\"boolean\",\"optional\":true,\"field\":\"motion\"},{\"type\":\"double\",\"optional\":true,\"field\":\"smoke\"},{\"type\":\"double\",\"optional\":true,\"field\":\"temp\"}],\"optional\":false,\"name\":\"device2\"},\"payload\":{\"ts\":1594615046659,\"co\":0.004940912471056381,\"humidity\":75.5,\"light\":false,\"lpg\":0.007634034459861942,\"motion\":false,\"smoke\":0.020363432603022532,\"temp\":19.399999618530273}}\n\nProcessed a total of 2 messages\n<\/code><\/pre>\n<h3>Querying our Data from GridDB<\/h3>\n<p>We can also try reading the data inserted into GridDB from our Kafka topics. We can do by directly querying our GridDB server using the <a href=\"https:\/\/github.com\/griddb\/cli\">GridDB CLI<\/a> like so:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">$ sudo su gsadm\n$ gs_sh\ngs[public]> sql select * from device7;<\/code><\/pre>\n<\/div>\n<pre><code>3 results. 
(25 ms)\ngs[public]&gt; get\nts,sensor,co,humidity,light,lpg,motion,smoke,temp\n2020-07-12T00:01:34.735Z,device7,0.0028400886071015706,76.0,false,0.005114383400977071,false,0.013274836704851536,19.700000762939453\n2020-07-12T00:02:19.641Z,device7,0.0028400886071015706,76.0,false,0.005114383400977071,false,0.013274836704851536,19.799999237060547\n2020-07-12T00:02:47.256Z,device7,0.0029050147565559603,75.9000015258789,false,0.005198697479294309,false,0.013508733329556249,19.700000762939453\nThe 3 results had been acquired.\n<\/code><\/pre>\n<p>Here we are verifying that Kafka was able to successfully insert its topic data into GridDB. At this point, each of our topics (deviceX) should have an equivalent GridDB container.<\/p>\n<h2>Conclusion<\/h2>\n<p>And with that, we have successfully used Kafka with GridDB, which is immensely useful for getting real-time data from your devices directly into your GridDB database and vice-versa.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Apache Kafka is a tool which allows for &#8220;real-time processing of record streams&#8221;. What this means is that you can send real-time data from your sensors or various other pieces of tools directly into something else, and in this case, directly into GridDB. 
You can also do the opposite: you can stream data from your [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":27911,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-46733","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Stream Data with GridDB and Kafka | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"Apache Kafka is a tool which allows for &quot;real-time processing of record streams&quot;. What this means is that you can send real-time data from your sensors or\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Stream Data with GridDB and Kafka | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"Apache Kafka is a tool which allows for &quot;real-time processing of record streams&quot;. 
What this means is that you can send real-time data from your sensors or\" \/>\n<meta property=\"og:url\" content=\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2023-01-27T08:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-13T20:56:19+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/griddb.net\/wp-content\/uploads\/2021\/11\/kafka.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1160\" \/>\n\t<meta property=\"og:image:height\" content=\"653\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Israel\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Israel\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"18 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\"},\"author\":{\"name\":\"Israel\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\"},\"headline\":\"Stream Data with GridDB and Kafka\",\"datePublished\":\"2023-01-27T08:00:00+00:00\",\"dateModified\":\"2025-11-13T20:56:19+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\"},\"wordCount\":2410,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\",\"url\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\",\"name\":\"Stream Data with GridDB and Kafka | GridDB: Open Source Time Series Database for 
IoT\",\"isPartOf\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"datePublished\":\"2023-01-27T08:00:00+00:00\",\"dateModified\":\"2025-11-13T20:56:19+00:00\",\"description\":\"Apache Kafka is a tool which allows for \\\"real-time processing of record streams\\\". What this means is that you can send real-time data from your sensors or\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"contentUrl\":\"\/wp-content\/uploads\/2021\/11\/kafka.png\",\"width\":1160,\"height\":653},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#website\",\"url\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL\",\"publisher\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},\"image\":{\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740\",\"name\":\"Israel\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598
d826e0306e6307?s=96&d=mm&r=g\",\"caption\":\"Israel\"},\"url\":\"https:\/\/griddb.net\/en\/author\/israel\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Stream Data with GridDB and Kafka | GridDB: Open Source Time Series Database for IoT","description":"Apache Kafka is a tool which allows for \"real-time processing of record streams\". What this means is that you can send real-time data from your sensors or","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/","og_locale":"en_US","og_type":"article","og_title":"Stream Data with GridDB and Kafka | GridDB: Open Source Time Series Database for IoT","og_description":"Apache Kafka is a tool which allows for \"real-time processing of record streams\". What this means is that you can send real-time data from your sensors or","og_url":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/","og_site_name":"GridDB: Open Source Time Series Database for IoT","article_publisher":"https:\/\/www.facebook.com\/griddbcommunity\/","article_published_time":"2023-01-27T08:00:00+00:00","article_modified_time":"2025-11-13T20:56:19+00:00","og_image":[{"width":1160,"height":653,"url":"https:\/\/griddb.net\/wp-content\/uploads\/2021\/11\/kafka.png","type":"image\/png"}],"author":"Israel","twitter_card":"summary_large_image","twitter_creator":"@GridDBCommunity","twitter_site":"@GridDBCommunity","twitter_misc":{"Written by":"Israel","Est. 
reading time":"18 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#article","isPartOf":{"@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/"},"author":{"name":"Israel","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740"},"headline":"Stream Data with GridDB and Kafka","datePublished":"2023-01-27T08:00:00+00:00","dateModified":"2025-11-13T20:56:19+00:00","mainEntityOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/"},"wordCount":2410,"commentCount":0,"publisher":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2021\/11\/kafka.png","articleSection":["Blog"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/","url":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/","name":"Stream Data with GridDB and Kafka | GridDB: Open Source Time Series Database for IoT","isPartOf":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2021\/11\/kafka.png","datePublished":"2023-01-27T08:00:00+00:00","dateModified":"2025-11-13T20:56:19+00:00","description":"Apache Kafka is a tool which allows for \"real-time processing of record streams\". 
What this means is that you can send real-time data from your sensors or","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb.net\/en\/blog\/stream-data-with-griddb-and-kafka\/#primaryimage","url":"\/wp-content\/uploads\/2021\/11\/kafka.png","contentUrl":"\/wp-content\/uploads\/2021\/11\/kafka.png","width":1160,"height":653},{"@type":"WebSite","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#website","url":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/","name":"GridDB: Open Source Time Series Database for IoT","description":"GridDB is an open source time-series database with the performance of NoSQL and convenience of SQL","publisher":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#organization","name":"Fixstars","url":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/logo\/image\/","url":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","contentUrl":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","width":200,"height":83,"caption":"Fixstars"},"image":{"@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/logo\/im
age\/"},"sameAs":["https:\/\/www.facebook.com\/griddbcommunity\/","https:\/\/x.com\/GridDBCommunity","https:\/\/www.linkedin.com\/company\/griddb-by-toshiba"]},{"@type":"Person","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/c8a430e7156a9e10af73b1fbb46c2740","name":"Israel","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/4df8cfc155402a2928d11f80b0220037b8bd26c4f1b19c4598d826e0306e6307?s=96&d=mm&r=g","caption":"Israel"},"url":"https:\/\/griddb.net\/en\/author\/israel\/"}]}},"_links":{"self":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts\/46733","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/comments?post=46733"}],"version-history":[{"count":1,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts\/46733\/revisions"}],"predecessor-version":[{"id":51404,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts\/46733\/revisions\/51404"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/media\/27911"}],"wp:attachment":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/media?parent=46733"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/categories?post=46733"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/tags?post=46733"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\
/{rel}","templated":true}]}}