Blog

Color Palettes Extraction Using Webcam and AI

In this tutorial, we will explore how to extract color palettes from images captured via a webcam using Node.js, GridDB, and OpenAI. By leveraging Node.js for server-side scripting, GridDB for efficient data storage, and OpenAI for advanced image processing, we will create a seamless pipeline to capture images, analyze them, and generate dynamic color palettes. This guide will walk you through setting up your environment, capturing images from your webcam, and using AI to extract and store color data effectively.

Prerequisites

Before we dive in, ensure the following software is installed on your machine:

Node.js
GridDB
OpenAI API access
A browser with webcam access

Running The Project

Clone the source code from this GitHub repository:

git clone https://github.com/griddbnet/Blogs.git --branch color-extraction

Node.js and GridDB must also be installed for the project to run. Once the software requirements are in place, change into the apps directory of the project and install all the dependencies:

cd color-detection-openai
cd apps
npm install

Create a .env file and copy all environment variables from the .env.example file. We need an OpenAI key for this project; see the "Get The OpenAI Key" section below for how to obtain one.

OPENAI_API_KEY=sk-proj-secret
VITE_APP_URL=http://localhost:3000

You can change VITE_APP_URL to suit your needs, then run the project with this command:

npm run start:build

Go to the browser and enter the URL set in VITE_APP_URL, which in this case is http://localhost:3000. Make sure to enable the webcam in your browser, then click the Capture button to take a photo using the web camera.

Setting Up the Environment

1. Installing Node.js

This project runs on the Node.js platform, so you need to install it first. We will use the nvm package manager and the Node.js v16.20.2 LTS release.

# installs nvm (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash

# download and install Node.js
nvm install 16

# verifies the right Node.js version is in the environment
node -v # should print `v16.20.2`

# verifies the right npm version is in the environment
npm -v # should print `8.19.4`

To connect Node.js and the GridDB database, we need the griddb-node-api npm package, a Node.js binding developed using the GridDB C Client and the Node Addon API.

2. Setting Up GridDB

We will use the GridDB database to store the captured images and their extracted color palettes. Please look at the official guide for detailed installation instructions; we will use Ubuntu 20.04 LTS here. Run GridDB and check whether the service is running with this command:

sudo systemctl status gridstore

If it is not running, start the database with:

sudo systemctl start gridstore

3. Get The OpenAI Key

To get an OpenAI key, create a project first and then create a key. The important thing is to save the OpenAI key in the .env file and ensure it is not committed to version control by adding that file to .gitignore.

OPENAI_API_KEY=sk-proj-secret

Another crucial factor is selecting models that are accessible to the project. For this project, we will utilize gpt-4o models for image recognition and for extracting colors from the image. The AI model's response is non-deterministic, which means the response is sometimes not exactly what we want. By default this project uses the gpt-4o-mini model; if the responses are not quite right, you can switch to a more powerful model, such as gpt-4o.
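Before moving on, it is worth sanity-checking that the key in .env is actually picked up by the server code. The repository wires this up for you, but as a rough sketch of what the server side amounts to (using the official openai npm package and dotenv; the exact file layout in the project may differ):

// Load OPENAI_API_KEY from .env and create the client used for image analysis.
import 'dotenv/config'
import OpenAI from 'openai'

if (!process.env.OPENAI_API_KEY) {
  throw new Error('OPENAI_API_KEY is missing; check your .env file')
}

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })
export default openai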
Capturing Images with MediaStream

To capture images, we can use the MediaStream API, a WebRTC-related API that provides support for streaming audio and video data. Before capturing an image from the web camera, we first need to initialize it:

const initializeWebcam = () => {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(stream => { videoRef.current.srcObject = stream })
    .catch(error => { console.error('getUserMedia error:', error) })
}

Then, to capture an image from the video, we can use the drawImage() function:

const captureImage = () => {
  const context = canvasRef.current.getContext('2d')
  context.drawImage(videoRef.current, 0, 0, canvasRef.current.width, canvasRef.current.height)
  const base64Image = canvasRef.current.toDataURL('image/jpeg')
  processImage(base64Image)
}

The drawImage() function captures the current frame from the video stream and renders it onto the canvas, which allows for further manipulation, processing, or conversion of the image data. In the code above, the drawn image is converted to a base64-encoded string using the toDataURL() function and then sent to a server for processing.

Processing Images with OpenAI

The image processing on the server is quite simple. The web app sends a base64-encoded image to the /process-image route.

app.post('/process-image', async (req, res) => {
  const { image } = req.body
  if (!image) {
    return res.status(400).json({ error: 'No image provided' })
  }
  // eslint-disable-next-line no-undef
  const result = await getColorAnalysis(image)
  res.json(result.choices[0])
})

To get the color analysis of the image, we use the gpt-4o-mini model from OpenAI. The getColorAnalysis() function takes the base64-encoded image and processes it:

async function getColorAnalysis(base64Image) {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini-2024-07-18",
    messages: [
      { role: "system", content: systemPrompt },
      {
        role: "user",
        content: [
          { type: "image_url", image_url: { url: base64Image } },
          { type: "text", text: userPrompt }
        ]
      }
    ],
    temperature: 0.51,
    max_tokens: 3000,
    top_p: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
  });
  return response;
}

The model's response is shaped by the prompt it is given. For a color analysis, use a specific prompt:

const userPrompt = "Extract the seven most prominent colors from the provided image. Use color clustering techniques to identify and present these colors in Hex values. Answer with the raw array values ONLY. DO NOT FORMAT IT.";

We can get a better result by adding a system prompt to the OpenAI model. The system prompt instructs the model to adopt a specific persona, in this case a professional color analyst.

const systemPrompt = `You are an AI specialized in colorimetry, the science and technology of color detection and measurement. You possess deep knowledge of the principles of color science, including color spaces, color matching functions, and the use of devices such as spectrophotometers and colorimeters. You provide accurate and detailed analyses of color properties, offer solutions for color consistency issues, and assist in applications ranging from imaging and printing to manufacturing and display technologies. Use your expertise to answer questions, solve problems, and provide color detection and measurement guidance.`;

The prompt can also specify the format of the model's response. In this project, we want an array of the colors found in the image.
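Since the model replies with plain text, that reply still has to be parsed into an actual array before it can be stored or rendered. The project handles this internally; the snippet below is only a minimal sketch of one way to do it, and the parseColors helper name is ours rather than something from the repository:

// Turn the model's raw reply (e.g. "['#2A2C9B', '#F08A7D', ...]") into an array of hex strings.
function parseColors(rawReply) {
  // Grab anything that looks like a 6-digit hex color.
  const matches = rawReply.match(/#[0-9a-fA-F]{6}/g) || []
  // Deduplicate while preserving order, in case the model repeats a color.
  return [...new Set(matches)]
}

console.log(parseColors("['#2A2C9B', '#F08A7D', '#8E5DB2']"))
// -> [ '#2A2C9B', '#F08A7D', '#8E5DB2' ]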
The OpenAI model response should be in the form:

['#2A2C9B', '#F08A7D', '#8E5DB2', '#E8A1A3', '#4D3B9E', '#7F3C8F', '#B57AB3']

where each item in the array is a color in hex format.

Storing Data in GridDB

We utilize the GridDB database for data storage. Here are the main data fields along with their descriptions:

Column Name  Type     Description
id           INTEGER  Unique identifier for each row.
picture      BLOB     Base64 image encoding.
colors       STRING   List of colors in hex format.

The saveData() function is a wrapper for the insert() function in the libs/griddb.cjs file. It is responsible for saving data into the database. Only two main fields are saved; the id is generated on the fly.

export async function saveData({ image, genColors }) {
  const id = generateRandomID()
  const picture = Buffer.from(image)
  const colors = String(genColors)
  const packetInfo = [parseInt(id), picture, colors]
  const saveStatus = await GridDB.insert(packetInfo, collectionDb)
  return saveStatus
}

The saveData() function is executed on the server route /process-image after the color analysis of the image. Every time a user captures an image, it is automatically sent to the server and the resulting data is saved to the database.

app.post('/process-image', async (req, res) => {
  const { image } = req.body
  if (!image) {
    return res.status(400).json({ error: 'No image provided' })
  }
  // eslint-disable-next-line no-undef
  const result = await getColorAnalysis(image)
  const colorsArray = result.choices[0].message.content
  // save data to the database
  const saveStatus = await saveData({ image, genColors: colorsArray })
  console.log(saveStatus)
  res.json(result.choices[0])
})

Building User Interfaces

The UI comprises two primary user interfaces: image capture and color palettes. React.js is utilized in this project to simplify component management.

Image Capture

The image capture user interface is simply an HTML5 video view. This snippet shows the main structure of the component:

// WebcamContainer.jsx
const WebcamContainer = ({ onColorsExtracted }) => {
  const captureImage = () => {
    const context = canvasRef.current.getContext('2d')
    context.drawImage(videoRef.current, 0, 0, canvasRef.current.width, canvasRef.current.height)
    const base64Image = canvasRef.current.toDataURL('image/jpeg')
    processImage(base64Image)
  }
  // code processing here
  return (
    /* video preview, canvas, and the Capture / Switch Camera buttons */
  )
}
export default WebcamContainer

When you click the Capture button, the captureImage() function captures the image from the current video frame and sends it for further processing. The full source code for the image capture user interface is in the WebcamContainer.jsx file.

Color Palettes

The color palette UI can be created using a series of dynamically colored svg rectangles.

// eslint-disable-next-line react/prop-types
const ColorRectangles = ({ colors }) => {
  return (
    /* one svg rectangle per entry, filled with that entry's color */
    colors.map((color, index) => ( /* svg rect */ ))
  )
}
export default ColorRectangles

For example, if the colors array is:

['#4B8B3B', '#C4B600', '#7D7D7D', '#E3D4A0', '#2E2E2E', '#F6F1D3', '#A6A6A6']

then the colors are rendered on the web page as in the screenshot below.

Server Routes

There are four server routes to handle client requests.

POST /process-image
Process an image for color analysis.

GET /colors
Retrieve all data from the database.

GET /colors/:id
Retrieve stored color data by ID. The picture field in the response is a Buffer, so to process it in the browser we need to convert it into a readable format first.
/**
 * Extracting the buffer data
 * Assume the result data name is jsonData
 */
const bufferData = jsonData[0][1].data;

// Converting buffer data to a Uint8Array object
const uint8Array = new Uint8Array(bufferData);

// Converting the Uint8Array to a UTF-8 string
const utf8String = new TextDecoder('utf-8').decode(uint8Array);
console.log(utf8String);

GET /delete/:id
Delete specific data in the database by its ID. For example, to delete the data with id 8900:

http://localhost:3000/delete/8900

Tools like Postman can be used to test these APIs.

SQL Data Test

To check the data in the database, we can use CLI commands. In this project, we use Ubuntu 20.04 LTS. Log in as the GridDB user:

sudo su gsadm

and then type this command to enter the GridDB shell:

gs_sh

In this shell, we can list all containers and run any SQL queries.

gs[public]> showcontainer
gs[public]> select * from ColorPalettes;
1 results (38ms)
gs[public]> delete from ColorPalettes where

More
Pairing GridDB Cloud with Grafana Cloud

With the release of a completely free GridDB Cloud, we wanted to pair its free service with Grafana Cloud, another free Cloud-based service which can get you up and running in seconds. For this article, we will walk through the steps of how to display time-series data from your GridDB Cloud shared instance to Grafana Cloud. If you are unfamiliar with GridDB Cloud, you can read our quick start guide here: GridDB Cloud Quick Start Guide — that article will teach you how to sign up, how to begin using GridDB Cloud and more of the basics: who, what, when, where, why. If you are also unfamiliar with Grafana, you can read about its capabilities and strength from their docs: https://grafana.com/docs/grafana/latest/introduction/. If you are curious as to how Grafana can enhance your GridDB experience, I will point you to a previous article we have written here: (Creating Heatmaps of Geometry Data using Grafana & GridDB)[https://griddb.net/en/blog/creating-heatmaps-grafana/]. Essentially, Grafana’s advanced visualization tools allow for some creative ways of analyzing and peering into your data. This article’s goal is simple: showcase how to use the cloud offerings from both vendors to display some GridDB Data in Grafana Cloud. We will go through this process step-by-step and explain any idiosyncrasies along the way. Let’s get started! Implementation First, here’s a link to the Grafana dashboard that we will be using for this article: https://imru.grafana.net/public-dashboards/8a9f9f8ed9d34582aecca867a50c9613. Source code can be found here: $ git clone https://github.com/griddbnet/Blogs.git –branch 4_grafana_cloud Prereqs To follow along, you will need to have access to a free account of both GridDB Cloud and Grafana Cloud. Technical Overview To query our GridDB Cloud data from Grafana, we will be sending HTTP Requests directly from Grafana to our GridDB Cloud. And indeed, any sort of interactions we want to make with our free GridDB Cloud instance will be done via Web API interface; this topic is covered in the quick start linked above, as well as in this article: GridDB WebAPI. You can also of course check out the official docs: GridDB_Web_API_Reference. The specifics of how to form our query and how to create our allowlist to get around GridDB’s firewall will be the subject of our next few sections. Adding Grafana’s IP Addresses to GridDB’s Allowlist In order for our Grafana Cloud instance to send HTTP Requests which are accepted as “safe” by our GridDB Cloud, we need to be able to add all potential IP Addresses to our GridDB Cloud instance. Browsing through the Grafana documentation, we see that they have these lists readily available for these exact scenarios: https://grafana.com/docs/grafana-cloud/account-management/allow-list/. The list we need is called ‘Hosted Grafana’, and if you take a quick peek, you’ll see the list is easily over 100 lines, so then how do we efficiently add all of these to our GridDB Cloud management portal? Well, luckily we have already encountered this scenario in our previous article: Create a Serverless IoT Hub using GridDB Cloud and Microsoft Azure. To solve the issue, we wrote a simple bash script which will take the .txt file as the input and add each ip address to the allowlist of GridDB Cloud’s online portal. Source code and instructions found in that blog in the “#whitelist” section. 
Here’s the script: #!/bin/bash file=$1 #EXAMPLE #runCurl() { #curl ‘https://cloud57.griddb.com/mfcloud57/dbaas/web-api/contracts/m01wc1a/access-list’ -X POST -H ‘User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:133.0) Gecko/20100101 Firefox/133.0’ -H ‘Accept: application/json, text/plain, */*’ -H ‘Accept-Language: en-US,en;q=0.5’ -H ‘Accept-Encoding: gzip, deflate, br, zstd’ -H ‘Content-Type: application/json;charset=utf-8’ -H ‘Access-Control-Allow-Origin: *’ -H ‘Authorization: Bearer eyJ0eXAiOiJBY2Nlc3MiLCJIUzI1NiJ9.eyJzdWIiOiJkMTg4NjlhZC1mYjUxLTQwMWMtOWQ0Yy03YzI3MGNkZTBmZDkiLCJleHAiOjE3MzEwMTEyMTMsInJvbGUiOiJBZG1pbiIsInN5c3RlbVR5cGUiOjF9.B1MsV9-Nu8m8mJbsp6dKABjJDBjQDdc9aRLffTlTcVM’ -H ‘Origin: https://cloud5197.griddb.com’ -H ‘Connection: keep-alive’ -H ‘Referer: https://cloud5197.griddb.com/mfcloud5197/portal/’ -H ‘Sec-Fetch-Dest: empty’ -H ‘Sec-Fetch-Mode: cors’ -H ‘Sec-Fetch-Site: same-origin’ -H ‘Priority: u=0’ -H ‘Pragma: no-cache’ -H ‘Cache-Control: no-cache’ –data-raw $1 #} runCurl() { $1 } while IFS= read -r line; do for ip in ${line//,/ }; do echo “Whitelisting IP Address: $ip” runCurl $ip done done < "$file" Querying GridDB Cloud from Grafana Cloud Out of the box, Grafana does have a method of sending HTTP Requests to your services, but from what I could tell, they're geared towards specific services (ie. Prometheus) and are limited to HTTP GET Requests. In our case, all of our requests to GridDB Cloud require POST requests, so we needed to find a solution for this: enter the Infinity plugin. Once installed, we will be able to add it as a datasource. Using Infinity as a Data Source From within your Grafana Cloud instance, select Connections --> Data Sources. If you installed the Infinity data source properly, it should show up in this section as an option — please select it. From here, we can add all of our pertinent GridDB Cloud information to forge our connection — we will need to add our basic authentication, and allow our portal’s hostname in the allowed hosts list. Once added, lastly click on the health check section and add your Web API + “/checkConnection” (ie. https://cloud5197.griddb.com:443/griddb/v2/gs_clustermfcloud5197/dbs/ZV8YUlQ8/checkConnection) as a simple sanity check AND health check. Hit Save & Test. We should be able to query our GridDB Cloud Database now! Ingesting Usable Time Series Data Before we query our data, let’s first ensure that we have working data in our GridDB Cloud. If you are following along and have just now made a new account, you can follow our quick start guide to ingest an IoT sample data that can be found on Kaggle. Here is a direct link to the section in the guide: https://griddb.net/en/blog/griddb-cloud-quick-start-guide/#ingest. Here, we are ingesting a csv file and calling the container device1. Forming Our HTTP Requests Now that we can communicate between services, let’s get the data we want. From the Grafana Cloud menu, select Dashboards and then select “new” in the top right corner and then finally Add visualization. From here, select Infinity and you will now have a blank graph and a place to put your query. 
And now for some options: Type: JSON Parser: Backend Source: URL Format: Data Frame Method: POST Note: Here is a screenshot of the entire query we will form in the ensuing lines of text (screenshot will be displayed again at the end once you can better understand what all of the words mean) The parser and format being what they are allows for us to properly name and label the data being received from GridDB Cloud because of the unusual way in which responds to the requestor with data. Instead of sending back the rows of data in JSON format (which, to be fair, if you’ve got a thousand rows, it’s a lot of unnecessary bloat), GridDB Cloud sends back the information as a JSON file, but the actual rows of data are in array form, with the schema being listed under another JSON key name (columns). As for the URL, you can take a look at the links above about how to form your Web API Request, but here are two we will be using: SQL: https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud5197/dbs/ZV8YUlQ8/sql/dml/query API: https://cloud5197.griddb.com:443/griddb/v2/gs_clustermfcloud5197/dbs/ZV8YUlQ8/containers/device1/rows In each case, we will have different selectors for our returned data, as well as different body payloads that we will be sending off as a request. First, let’s take a look at the SQL Query. Making a Simple SQL-Select Query First, set your URL to match the query above. The format is as follows: https://[cloud-portal-name].griddb.com/griddb/[clustername]/dbs/[database-name]/sql/dml/query. And then we form our SQL Query within the body of the request. To properly parse this, we set the format as data frame, and then under parsing options & Results fields, we type in “results” in the JSONata/rows selector box and then click add columns. To start, we simply want a couple of columns from our dataset, so we can se the columns like so: Selector: 0 #array index as: ts format-as: Time And then we select the column we want to track in our graph, let’s take a look at temperature Selector: 7 as: temperature format-as: Number Lastly, because it’s a POST request, we must send off something within the body of our request, but in this case, it’s going to be a SQL query. Right under the Method dropdown menu, there’s a button that says Headers, Body, Request Params. Click this. Fro that sub menu, set the Body Type to Raw and set the Body Content Type to JSON. And in the large text box you can add your actual body — or in our case, our SQL query: [ {“stmt” : “select * from device1 LIMIT 1000”} ] Note: I highly recommend using a limit on your query, otherwise Grafana may malfunction trying to show all of your data points. In the graph above, click “zoom in on data” if necessary and your data will be displayed! Cool! More SQL Queries (Group By Range, Aggregations) With the advent of GridDB 5.7, you can make some more complex SQL Queries as well and are not limited to SELECT statements. For example, we can use the SQL Group By Range which allows for us to perform a aggregation operations over a given time span. For example, this is what our query looks like: select ts,temp from device1 WHERE ts BETWEEN TIMESTAMP(‘2020-07-12T01:00:25.984Z’) AND TIMESTAMP(‘2020-07-12T01:22:29.050Z’) GROUP BY RANGE (ts) EVERY (1, SECOND) FILL (LINEAR). So we can simply plug this in to our Infinity plugin and see the results (Hint: you may need to change your column selectors and refresh the graph). 
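If a query misbehaves inside Grafana, it can be easier to run the same statement against the Web API directly and inspect the raw reply before blaming the plugin. Here is a minimal Node.js (18+) sketch of that check; the URL is the example endpoint from above and the credentials are placeholders you would replace with your own:

// Post one SQL statement to the GridDB Cloud SQL endpoint and print the raw JSON reply.
const url = 'https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud5197/dbs/ZV8YUlQ8/sql/dml/query'
const auth = Buffer.from('username:password').toString('base64') // placeholder credentials

const stmt = "select ts,temp from device1 WHERE ts BETWEEN TIMESTAMP('2020-07-12T01:00:25.984Z') AND TIMESTAMP('2020-07-12T01:22:29.050Z') GROUP BY RANGE (ts) EVERY (1, SECOND) FILL (LINEAR)"

async function main() {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Basic ${auth}` },
    body: JSON.stringify([{ stmt }]),
  })
  // The reply contains a "columns" list (the schema) and a "results" array (the rows).
  console.log(JSON.stringify(await res.json(), null, 2))
}

main().catch(console.error)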
You can also do other SQL aggregation queries, really there is no limit; you can read more about that in the GridDB SQL docs: https://griddb.org/docs-en/manuals/GridDB_SQL_Reference.html Using the Group By Range feature is excellent for creating dense graphs even if you don’t have a dense enough dataset! API Query We can also skip SQL and just the API to make queries using simple JSON options in our body request. The URL will be built out as follows: https://[cloud-portal-name].griddb.com/griddb/[clustername]/dbs/[database-name]/containers/[container-name]/rows So enter in your URL and, set the Body Content Type to JSON, and the Body content to some combination of the following: { “offset” : 0, “limit” : 100, “condition” : “temp >= 30”, “sort” : “temp desc” } As you can see, you set the condition and sort options for your dataset. The column options remain the same, except the results of the data is now called ‘rows’, so change that option in the Parsing Options & Result Fields section; you can keep the same column selectors as those stay the same. Conclusion Once you are done querying and adding in the data you like, you can of course save your dashboard for future use. And that’s all! If you have followed along, you have now been able to pair GridDB Cloud with Grafana Cloud to display your

More
GridDB v5.7 Web API Changes

GridDB v5.7.0 has released and we would like to go over some of the new features. You can download the new release directly from GitHub and from the Downloads page. In this article, we specifically want to take a closer look at two of the new features: WebAPI changes and SQL Working Memory limit. GridDB Web API Changes in v5.7.0 The GridDB WebAPI allows for you to interact with your GridDB server via HTTP Requests. The major changes in this release are expanded capability of the SQL functionality, as well as a new way to install the package itself. Packaged within this new release are SQL commands which were previously unavailable, including DDL, DML, and DCL commands. On the github page, you will also find .deb and .rpm files for installation, which will also add installing the web api as a service. The following sections’ information can be found on the project’s github page: https://github.com/griddb/webapi/blob/master/GridDB_Web_API_Reference.md Installation Changes As explained above, you can now install the GridDB Web API as a package, meaning it will auto-populate all necessary directories for you and also create a symlink so that you can launch the service using systemctl, similar to how the GridDB server operates. To do so, grab the .deb file from https://github.com/griddb/webapi/releases/tag/5.7.0 and run: $ sudo dpkg -i https://github.com/griddb/webapi/releases/download/5.7.0/griddb-ce-webapi_5.7.0_amd64.deb Once installed, you can make the changes you deem necessary in the following directory: /var/lib/gridstore/webapi/conf. And then to start: $ sudo systemctl start griddb-webapi.service SQL DDL (Data Definition Language) To me, the standout feature for the Web API is the inclusion of SQL DDL Commands. With this, we can now create tables using the Web API with the familar SQL syntax and are no longer required to use the TQL API for creating our tables. And though the previous methods worked just fine before, the benefits of this addition are two-fold: 1, SQL is a widely-known query language and therefore easier to work with, and 2, TQL commands did not allow for manipulating or creating partitioned tables/containers. Now let’s take a look at creating a table using the new SQL DDL Command. The URL path is as follows: /:cluster/dbs/:database/sql/ddl. Here’s a working example: curl –location ‘http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public/sql/ddl’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: ••••••’ \ –data ‘[ {“stmt” : “CREATE TABLE IF NOT EXISTS pyIntPart2 (date TIMESTAMP NOT NULL PRIMARY KEY, value STRING) WITH (expiration_type=’\”PARTITION’\”,expiration_time=10,expiration_time_unit=’\”DAY’\”) PARTITION BY RANGE (date) EVERY (5, DAY);”}, {“stmt” : “ALTER TABLE pyIntPart2 ADD temp STRING”} ]’ Here we are making two statements, one to create a partitioned table with expiry rules (you can read about that here: https://griddb.net/en/blog/griddb-partitioning-and-expiry/, and another to alter that same table and add a new column. Again, this was not possible before because of the nature of partitioned tabled. SQL DML (Data Manipulation Language) With DML commands, we can run SELECT, INSERT, UPDATE, etc commands on our containers. This functionality was partly there prior to the v5.7.0 release, though it was a lot more limited and is no longer recommended to be used. To conduct a DML request, the path is similar to the one above: /:cluster/dbs/:database/sql/dml/query. 
With this, you can run a SELECT statement to run a lookup of some data from your container. For example curl –location ‘http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public/sql/dml/query’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: ••••••’ \ –data ‘[ {“stmt” : “select AVG(total_points_per_game) from Top_NBA_Playoff_Scorers”} ] ‘ Again, if we wanted to query data from a partitioned table using the TQL method, the command will fail: curl –location ‘http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public/tql’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: ••••••’ \ –data ‘[ {“name” : “pyIntPart2”, “stmt” : “select *”, “columns” : null} ]’ DML Update We can also update rows by using the same rowkey or we can add new ones by using a fresh key. For example: curl –location ‘http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public/sql/dml/update’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: ••••••’ \ –data ‘[ {“stmt” : “INSERT INTO pyIntPart2(date, value, temp) VALUES (NOW(), ‘\”blog_test’\”, ‘\”cold’\”)”} ] ‘ Again, the nice thing about using SQL here instead of the old TQL is that we can use the special GridDB Time Series SQL commands such as NOW(). You can read more about those commands in the docs: https://docs.griddb.net/sqlreference/sql-commands-supported/#time-functions. If we tried to use the NOW() command with the TQL version of an insert, it fails: curl –location –request PUT ‘http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public/containers/pyIntPart2/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: ••••••’ \ –data ‘[ [NOW(), “failure”,”hot”] ]’ This command will fail. SQL DCL (Data Control Language) The SQL Data Control Language is mostly concerned with rights/permissions for your database. This one isn’t as exciting but it can be useful if managing many databases or users. For eexample, if you create a new database and a new user, you can grant that user access to that DB with the Web API. curl –location ‘http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public/sql/dcl’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: ••••••’ \ –data ‘[ {“stmt” : “REVOKE all on testing1 from israel”}, {“stmt” : “GRANT all on testing1 to israel”} ]’ And you can verify this by using the GridDB CLI tool gs[public]> showuser israel Name : israel Type : General User GrantedDB: public testing1
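All of the curl examples above can also be scripted. Below is a rough Node.js (18+) sketch of the same flow; it reuses the example host and database from the curl commands, a placeholder Authorization value, and a made-up table name (webapiDemo) purely for illustration:

// Post an array of SQL statements to one of the Web API SQL endpoints.
const base = 'http://192.168.50.206:8082/griddb/v2/myCluster/dbs/public'
const headers = {
  'Content-Type': 'application/json',
  Authorization: 'Basic <your-base64-credentials>', // placeholder
}

async function runSql(path, statements) {
  const res = await fetch(base + path, {
    method: 'POST',
    headers,
    body: JSON.stringify(statements.map(stmt => ({ stmt }))),
  })
  const text = await res.text()
  return text ? JSON.parse(text) : res.status // DDL replies may have an empty body
}

async function main() {
  // DDL: create a simple demo table.
  await runSql('/sql/ddl', [
    'CREATE TABLE IF NOT EXISTS webapiDemo (ts TIMESTAMP NOT NULL PRIMARY KEY, value STRING)',
  ])
  // DML: query it back.
  console.log(await runSql('/sql/dml/query', ['SELECT * FROM webapiDemo']))
}

main().catch(console.error)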

More
Monitoring Air Quality in California using Home Assistant, Raspberry Pi, and GridDB Cloud

Despite humanity’s (lackluster) efforts, climate change has become an omnipresent, undeniable force. Though there are many side effects that come with a rising global average temperature, today I want to focus on wildfires and their affect on the air quality of all surrounding areas; when a 100-acre-fire is burning up a Californian forest, all of that debris and particle matter get kicked up into the atmosphere, becoming potentially dangerous, inhalable matter. Of course, there are various other factors which can (and do) contribute to the quality of the air we breathe. To me, this means that even if there isn’t a wildfire nearby causing spikes in AQI (Air Quality Index), there is still a reason to be informed and aware of what the air quality is like at any given moment. The Project For this article, we were interested in measuring the air quality inside of our homes. Specifically, we wanted to take live readings of our air quality data, save it into persistent storage, and then notify the inhabitants when the air quality grew passed a certain threshold. To accomplish our goal, we sought to integrate GridDB Cloud with the open source smart home solution known as Home Assistant. If you are unfamiliar, Home Assistant is an “Internet of things (IoT) ecosystem-independent integration platform and central control system for smart home devices, with a focus on local control”. Typically, end users install the software onto their home servers to control aspects of their internet-connected physical devices. As an example, one might install Home Assistant to act as their smart hub to control their smart light bulbs, smart robovac, etc. The beauty of Home Assistant is in its versatility and flexibility. For instance, because it’s completely open source and built for tinkerers/developers, users can roll their own solutions for their own specific use cases, and build their own automations. For our case, we were interested in being able to save all of the raw, unfiltered sensor data into GridDB Cloud at a resolution of 1 reading/second, and then using the Home Assistant to query the values needed to do what we want, eventually being able to notify those who live in the same space as the sensor that the air quality is approaching dangerous levels in some tangible way. Project Specifics and the PM1 Particle The idea goes like this, we use a single board computer to connect to a sensor which will capture raw air quality data. We then send that raw data into GridDB cloud at a resolution of 1 particle/second. Our Home Assistant can then query the data and downsample whatever information is necessary to accomplish our goal. To begin, we have connected an air quality sensor — Adafruit PMSA003I Air Quality Breakout — to a raspberry pi. We then use a python script to read and send the sensor readings up to GridDB Cloud every 1 second via HTTP Request. With our data being saved into persistent storage with GridDB Cloud, we can make HTTP Requests to query our dataset with SQL Select statements. In this case, we want to use Home Assistant to query our dataset to alert us of higher than normal particle matters in the air at our locations. We also want to include an easy to read sensor reading right on our Home Assistant dashboard with rolling averages. As a sidenote: an interesting point about this particular sensor is that it can read matter as small as 1 micron (labeled as PM1). 
These particles are so small that they can penetrate lung tissue and get directly into your bloodstream; and because this particle is so tiny, it’s only detectable by specialized equipment, including the sensor linked above. Because of the lack of readily available sources of PM1 readings, this particle will be the focus of our queries for notifying the home inhabitants of its increasing levels. Project Requirements If you would like to follow along, you will need the following: Free GridDB Cloud Account a. You can read instructions on how to sign up for a free GridDB cloud account from here: GridDB Free Plan. Accompanying blog: GridDB Cloud Quick Start Air Quality sensor Means of connecting the sensor to a computer (single board or otherwise) Once you have set up the necessary hardware and can connect to your GridDB Cloud database, you can grab the source code from here: GitHub to run the simple python script to send sensor readings. Connecting the Hardware In my case, I bought the air quality sensor attached with a STEMMA Connector; to connect it to the Raspberry Pi 4, I bought a STEMMA Hat and a STEMMA wire. If you do not want to purchase a STEMMA hat, you can also solder the pins onto the sensor and use a breadboard to connect to the Raspberry Pi’s GPIO pins. If you go this route, you may need to alter the python script provided by our source code — you can read more about how to physically connect this air quality sensor through their documentation page: Docs. Software Now let’s focus on the software that makes this project go. Python Script for Container Creation and Pushing Data First and foremost, let’s discuss the script which sends the sensor readings as HTTP requests. It is a modified version of the example script provided directly by ada fruit’s documentation. The data structure of the incoming data readings are already laid out in convenient dictionary matter, so we simply need to iterate through and make the values match up with how we lay out our schema on container creation. So first, here’s the script for container creation: import http.client import json conn = http.client.HTTPSConnection(“cloud5197.griddb.com”) payload = json.dumps({ “container_name”: “aqdata”, “container_type”: “TIME_SERIES”, “rowkey”: True, “columns”: [ { “name”: “ts”, “type”: “TIMESTAMP” }, { “name”: “pm1”, “type”: “DOUBLE” }, { “name”: “pm25”, “type”: “DOUBLE” }, { “name”: “pm10”, “type”: “DOUBLE” }, { “name”: “pm1e”, “type”: “DOUBLE” }, { “name”: “pm25e”, “type”: “DOUBLE” }, { “name”: “pm10e”, “type”: “DOUBLE” }, { “name”: “particles03”, “type”: “DOUBLE” }, { “name”: “particles05”, “type”: “DOUBLE” }, { “name”: “particles10”, “type”: “DOUBLE” }, { “name”: “particles25”, “type”: “DOUBLE” }, { “name”: “particles50”, “type”: “DOUBLE” }, { “name”: “particles100”, “type”: “DOUBLE” } ] }) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic <redacted>’ } conn.request(“POST”, “/griddb/v2/gs_clustermfcloud5197/dbs/B2xcGQJy/containers”, payload, headers) res = conn.getresponse() data = res.read() print(data.decode(“utf-8”))</redacted> For the schema, we simply just used a 1:1 mapping to the aqdata dictionary returned by our sensor readings. Even if we don’t intend to use all of these data points, the data readouts are so small, we keep them in. Next, let’s take a look at the script that will push sensor readings to our GridDB Cloud instance. It will read the sensor data every 1 second, and then make an HTTP Request every second to push that data into the Cloud. 
# SPDX-FileCopyrightText: 2021 ladyada for Adafruit Industries # SPDX-License-Identifier: MIT “”” Example sketch to connect to PM2.5 sensor with either I2C or UART. “”” # pylint: disable=unused-import import time import datetime import board import busio from digitalio import DigitalInOut, Direction, Pull from adafruit_pm25.i2c import PM25_I2C import http.client import json conn = http.client.HTTPSConnection(“cloud5197.griddb.com”) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic <redacted>’ } reset_pin = None i2c = busio.I2C(board.SCL, board.SDA, frequency=100000) # Connect to a PM2.5 sensor over I2C pm25 = PM25_I2C(i2c, reset_pin) print(“Found PM2.5 sensor, reading data…”) while True: time.sleep(1) try: aqdata = pm25.read() # print(aqdata) current_time = datetime.datetime.utcnow().replace(microsecond=0) now = current_time.strftime(‘%Y-%m-%dT%H:%M:%S.%fZ’) # print(now) temp = [] temp.append(now) except RuntimeError: print(“Unable to read from sensor, retrying…”) continue for data in aqdata.values(): temp.append(data) payload = json.dumps([temp]) print(payload) conn.request(“PUT”, “/griddb/v2/gs_clustermfcloud5197/dbs/B2xcGQJy/containers/aqdata/rows”, payload, headers) res = conn.getresponse() data = res.read() print(data.decode(“utf-8”))</redacted> As explained above, there is nothing fancy or extraordinary about this script; it simply reads sensor data and then immediately pushes it out to the Cloud with a timestamp attached. The one thing to note, though, is that the GridDB Cloud is by default in UTC Time, so I have altered the timestamps to be attached to the data results to match UTC for consistency’s sake. And now that we have our data available in the cloud, we can move on to integrating it with other technologies. Home Assistant Home automation software comes in many varieties, with many companies providing their own hubs sold alongside their own products. Home assistant is unique in that it is, 1. completely open source, and 2. made to integrate with all manner of physical and virtual devices. For example, in my own personal home environment, I have Home Assistant running on a Raspberry Pi. To allow it to communicate with other physical devices, I have installed a Zigbee/Z-Wave USB Stick. If you are unfamiliar, Zigbee and Z-Wave are protocols used by smart home devices to communicate with their hubs. In my case, most of my smart light bulbs communicate through Zigbee, for example. In any case, through Home Assistant, we can create various scripts/automations for getting things done. Some examples can be: turn on bedroom lights at 1% at wake up time, or play my Spotify playlist every day at lunch time, etc etc. In the case of our air quality sensor, we can send out a direct HTTP Query against our sensor data and do something if our readings are higher than a certain threshold. For this blog, I have set up my home to turn my living room light bulbs on and to red if the pm1 particles are above a certain threshold, over the past 1 hour. But of course, because Home Assistant is flexible, you could set up any sort of notification method you’d like, including emails, phone push notifications, or even turning on your robovac! So, to continue on, we will need to first formulate our query on sensor readings, learn how to make HTTP Requests through Home Assistant, and learn how to act on the data returned by our query. 
After we set up our notifcation system for high sensor reading averages, we will also want to display all sensor readings in our Home Assistant dashboard. Formulating our Query First, let’s get our query settled. From my cursory research, pm1 particles are potentially the most threatening to our health because those particles are so tiny they can be inhaled and absorbed directly through the lungs; we will want to set up an alert for these particles. Before we begin, Let’s use cURL or Postman to test our HTTP Queries until we get what we are looking for; in my case, I used postman until I was happy with the results. The query I settled is the following: SELECT AVG(pm1) FROM aqdata WHERE ts > TIMESTAMP_ADD(HOUR, NOW(), -1) AND ts < NOW() AND pm1 > 10 “. This query will look at the data from the past 1 hour and will return with data if the avg value is over 10. Though please note I did not do strict research on what consitutes an unhealthy amount of pm1 particles, this is simply for demo purposes. But now that we have our query, we can figure out how to make HTTP REQUESTS through Home Assistant. Making HTTP Requests with Home Assistant Within the /config directory of the Home Assistant, there are a bunch of yaml files which are used to make configuration and automation changes to the software itself — very flexible and customizable. In our case, we want to add what is called a rest_command inside of the configuration yaml. We also would like to add an automation of what action to take based on the returned data from our rest_command — this will happen in the scripts.yaml file. Please note that all yaml files are included in the source code in the GitHub page linked above. So, here is what our configuration.yaml file will need to make the HTTP Request: #configuration.yaml rest_command: griddb_cloud_get_aqdata: url: https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud5197/dbs/B2xcGQJy/sql method: post content_type: “application/json” headers: authorization: “Basic <redacted>” payload: ‘[{“type” : “sql-select”, “stmt” : “SELECT AVG(pm1) FROM aqdata WHERE ts > TIMESTAMP_ADD(HOUR, NOW(), -1) AND ts < NOW() AND pm1 > 10 “}]'</redacted> You can see here we included all neccesary things to make an HTTP Sql Select request to our Cloud instance. Here, we are naming our rest_command as griddb_cloud_get_aqdata, so now in our scripts file we can directly call upon this service #scripts.yaml get_griddb_data: sequence: – service: rest_command.griddb_cloud_get_aqdata response_variable: aqdata – if: “{{ aqdata[‘status’] == 200 }}” then: – alias: Parse data variables: results: “{ {aqdata[‘content’][‘results’] }}” – if: “{{ results != ‘null’}}” then: service: light.turn_on target: entity_id: light.living_room data: rgb_color: – 240 – 0 – 0 As you can see, we are calling rest_command.griddb_cloud_get_aqdata as a service. This will run the HTTP Request and then parse the resulting data. If the results are null (meaning no data over our threshold), nothing will happen, but if we do get some data, we can take some action — or in this case, change the living room lights to on and change the RGB to completely red, this way everybody in the home knows that the air quality is compromised in some way. So to explain a bit more about how the yaml files work, the configuration yaml allows you to create services, in our case, the HTTP Request. The scripts file is for actions which may be run many times in many different spots, a bit analoguous to functions in software. 
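Before wiring the rest_command into an automation, it is worth confirming that the SQL statement returns what you expect; this is the same check described above with cURL or Postman, just scripted. Any HTTP client will do; here is a small Node.js (18+) sketch using the endpoint and query from the yaml above, with a placeholder Authorization value:

// Run the pm1 threshold query against GridDB Cloud and print the raw result.
async function checkPm1Average() {
  const res = await fetch('https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud5197/dbs/B2xcGQJy/sql', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: 'Basic <your-base64-credentials>', // placeholder
    },
    body: JSON.stringify([
      { type: 'sql-select', stmt: 'SELECT AVG(pm1) FROM aqdata WHERE ts > TIMESTAMP_ADD(HOUR, NOW(), -1) AND ts < NOW() AND pm1 > 10' },
    ]),
  })
  const data = await res.json()
  // "results" comes back null/empty when nothing exceeded the threshold in the past hour.
  console.log(res.status, data[0].results)
}

checkPm1Average().catch(console.error)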
And lastly we will use automations.yaml which sets up the trigger to when our script should be running. In our case, we want it run every 10 minutes — that is, every 10 minutes our Home Assistant will look at the average pm1 levels over the past 1 hour and take action if it is too high. #automations.yaml – id: ‘1713475588605’ alias: Get GridDB Data description: get the griddb data trigger: – platform: time_pattern minutes: /10 condition: [] action: – service: script.get_griddb_data mode: single You can see here that the automation is calling upon our script to get the GridDB data. You can also see that our trigger is every 10 minutes. Displaying Sensor Data Onto our Home Assistant Dashboard The last thing we would like to accomplish is to show our sensor readings directly onto the Home Assistant Dashboard. This will allow for all home users to constantly be aware othe readings. To do this, we will need to use the sensors.yaml file and establish new virtual “sensors” of HTTP Requests reading the sensor avg sensor data. For this, we needed to formulate a new SQL Query and this time around, I think we could make do with average readings for the past 24 hours. That query looks like this: SELECT ROUND(AVG(pm1)),ROUND(AVG(pm25)),ROUND(AVG(pm10)),ROUND(AVG(particles03)),ROUND(AVG(particles05)),ROUND(AVG(particles10)),ROUND(AVG(particles25)),ROUND(AVG(particles50)),ROUND(AVG(particles100)) FROM aqdata WHERE ts > TIMESTAMP_ADD(DAY, NOW(), -1) AND ts < NOW(). We simply grab all rounded averages from the past 1 day and use that info to be displayed in the dashboard. #sensors.yaml – platform: rest resource: https://cloud5197.griddb.com/griddb/v2/gs_clustermfcloud5197/dbs/B2xcGQJy/sql method: POST headers: authorization: “Basic <redacted>” Content-Type: application/json payload: ‘[{“type” : “sql-select”, “stmt” : “SELECT ROUND(AVG(pm1)),ROUND(AVG(pm25)),ROUND(AVG(pm10)),ROUND(AVG(particles03)),ROUND(AVG(particles05)),ROUND(AVG(particles10)),ROUND(AVG(particles25)),ROUND(AVG(particles50)),ROUND(AVG(particles100)) FROM aqdata WHERE ts > TIMESTAMP_ADD(DAY, NOW(), -1) AND ts < NOW() “}]’ value_template: “{{ value_json[0].results[0][0] }}” name: “Particle Matter 1” scan_interval: 3600</redacted> Here we use the value_template as the result that will be shown when we select this sensor as an entity. Unfortunately, I could not figure out how to use one singular HTTP Request with multiple values, so I needed to make each value as its own unique HTTP Request, just with a different index array position for the results. The example shown above, for instance, is for pm1 as it is index 0 of our result array due to the container schema. So now that we have our sensor set up, we can go to the dashboard and add it to be displayed. We can click edit (the pencil in the top right corner), then add card, then glance card, and then manually add all of your sensors. For me, the text editor looks like this: show_name: true show_icon: true show_state: true type: glance entities: – entity: sensor.particle_matter_1 – entity: sensor.particle_matter_2_5 – entity: sensor.particle_matter_10 – entity: sensor.particles_03 – entity: sensor.particles_05 – entity: sensor.particles_10 – entity: sensor.particles_25 – entity: sensor.particles_50 – entity: sensor.particles_100 title: Daily Air Quality Averages columns: 3 state_color: false And now we can have our daily averages at a glance. 
Of course, these are just a few of the things you can do with this information being so readily available. That is the beauty of having the open source Home Assistant powering your home automations, and why it is great to have GridDB Cloud hosting all of your data: it is accessible from anywhere, and you can upload data from anywhere, as long as you have an internet connection. You could, for example, monitor poor air quality in remote locations, or in the homes of loved ones far away, and take action however you see fit.

Conclusion

In this article, we have learned how to connect physical hardware sensor data and how to push all of that wonderful data onto GridDB Cloud. And then we learned that we can take action directly upon that data with Home

More
Create a Server-less IoT Hub using GridDB Cloud and Microsoft Azure

One of GridDB’s main draws are its inherent strengths at managing an IoT system, with its unique key-container data model and memory-first data architecture. With that in mind, we wanted to showcase how you could build out an IoT system completely on the cloud, using Azure’s IoT Hub for the IoT devices and event handling, and GridDB Cloud for the data storage. The easiest way to manage this is to send data from IoT Hub device via HTTP Requests through the use of Azure Functions. For this article, we want to use the VS Code extensions for the Azure IoT Hub to create all of the resources we will need to create, test, and manage our virtual devices with our IoT Hub. We will also be utilizing the Azure CLI Tools to get a list of possible IP Addresses to be whitelisted in the GridDB Cloud. And then finally, we will create some Azure Functions (with VS Code), deploy them onto your Azure subscription, and then pair it with your Iot Hub. Again, the goal is for the device to emit data, trigger an event, and then send the data payload out to GridDB Cloud, seamlessly. Implementation We will now go through how to make this project work. Prerequisites To follow along, you will need an account with Microsoft Azure with credits (unfortunately we can’t get this up and running for free) and a free trial to GridDB Cloud. The cheapest available IoT Hub is estimated at $10/month. Though not required, this article will reference completing many of the actions through VS Code and through the Azure CLI tool. Creating Azure Resources Now let’s go through and create our Azure resources. We will need to create the following: Azure IoT Hub Virtual Device(s) Azure Functions Creating the Azure IoT Hub This is our main hub; it will be a one-stop shop for our event triggers and our virtual devices. To create a new hub, you can either create it in the Azure Web Portal (straightforward) or through the VS Code extension (potentially easier). To build with VS Code, first install the proper extension: https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-toolkit. Next, open up the Command Palette (F1 key) and select Azure IoT Hub: Create IoT Hub and fill out all of the necessary information, including region, subscription, and choose a globally unique name (the hub needs to be unique because it gets is own public facing DNS name). Adding Virtual Devices to the Hub Now we want to add virtual IoT devices, so once again open up the Command Palette and select Azure IoT Hub: Create Device and give it a device name. You can add as many devices as you’d like, but for now we’ll keep it at one device. We will dicuss how to send data from this device to the Hub later. Creating Azure Functions for Monitor Events Lastly we would like to utilize Azure Functions to monitor our device to detect changes; once an event is detected, we want to be able to control what occurs next, in our case, an HTTP Request to be made. To do so, we will need one more VS Code Extension: Azure Functions And now, once again, open up your Command Palette and select Azure Functions: Create Function App in Azure… to create a function. At this stage, we are naming it, creating a local instance of it, and choosing a runtime, of which we chose the latest version of node.js available. 
For more information on how these event monitoring system works within Azure, here is some of the documentation: https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-event-grid Sending our Data Payloads to GridDB Cloud With our resources in place, the next step is to set up the event monitoring. We want for our hub to make HTTP Requests of our data payloads whenever it detects that one of its devices emits data. To do so, we will use the Azure Function that we created in the previous step and event monitoring. Azure Function: Source Code for when Event is Triggered We have already created the source code necessary for this step, so please clone the repo as indicated above. The code is very simple: it takes the sample code built by Azure for eventGridTriggers and simply adds a component to make HTTP Requests whenever the trigger is fired. Here is what the source code looks like: const { app } = require(‘@azure/functions’); const axios = require(‘axios’); require(‘dotenv’).config() app.eventGrid(‘eventGridTrigger1’, { handler: (event, context) => { context.log(‘Event grid function processed event:’, event); const container = ‘azureTest’ const auth = { username: process.env.CLOUD_USERNAME, password: process.env.CLOUD_PASSWORD } const headers = { ‘Content-Type’: ‘application/json’ } //HTTP Request to create our container called azureTest const dataCreation = { “container_name”: container, “container_type”: “COLLECTION”, “rowkey”: false, “columns”: [ { “name”: “test”, “type”: “STRING” } ] } const configCreation = { method: ‘POST’, maxBodyLength: Infinity, url: process.env.CLOUD_URL + “/containers”, headers, data: dataCreation, auth } axios.request(configCreation) .then((response) => { context.log(response.statusText); context.log(JSON.stringify(response.data)); }) .catch((error) => { context.log(error); context.error(error) }); //HTTP Request to send data to our container const data = JSON.stringify([ [“GRID EVENT TRIGGERED”] ]); let config = { method: ‘PUT’, maxBodyLength: Infinity, url: process.env.CLOUD_URL + “/containers/” + container + “/rows”, headers, data, auth }; axios.request(config) .then((response) => { context.log(response.statusText); context.log(JSON.stringify(response.data)); }) .catch((error) => { context.log(error); context.error(error) }); } }); We are using the GridDB Cloud Web API to build our HTTP Request to send the payload to be saved. With everything in place, we can deploy our function to the Azure Cloud. You will need to copy the .env-example file and rename it to .env and fill out the values yourself. For the CLOUD URL variable, please just copy the GridDB WebAPI URL from your GridDB Portal as is. Deploy Azure Function to Azure Cloud So now, once again, make sure you have the source code provided by this blog in the repo pointed out above and make sure that your VS Code current folder/project is inside of a directory that has the source code. Next, in the Command Palette, select Azure Functions: Deploy to Function App and select the Function you created above. This will create a zip of your current working directory (meaning all of the source code you downloaded from this article’s repo) and deploy it directly to your Azure Account. And now we want to tie this source code with our IoT Hub and our test virtual device. Tying Azure Function to IoT Hub For this last step, we would like for our hub to utilize the Azure Function created in the previous step. For this step, we will use the Azure Web Portal. Open up the portal and find your IoT Hub Resource. 
Within that page navigate to: Events -> Azure Function. We will be creating a new event, so give it a name and system topic. For the “filter for event types” box, keep it selected only to Device Telemetry. And lastly, click Configure an Endpoint. In the side panel, most likely everything will self-populate, but if not, choose the azure function app and functions we made previous (ie. function is called: eventGridTrigger1). NOTE: For this to work, your account will need the registry Microsoft.EventGrid in the subscription page enabled. Receiving Data from IoT Hub Lastly, even if we were to trigger an event of some payload to our hub from our device, the HTTP Request would fail because the GridDB Cloud requires all incoming IP Addresses to be whitelisted. The issue now arises that we are using a lightweight Azure Function to handle our events, not a full blown server/VM with one static IP Address. Luckily, there is a way to retrieve all possible IP Addresses used by our Azure Function as shown in these docs: https://learn.microsoft.com/en-us/azure/azure-functions/ip-addresses. You will need to install the Azure CLI and login to your proper account/subscription. Once there, you can run the following command: $ az functionapp show –resource-group <INSERT RESOURCE NAME> –name <INSERT AZURE FUNCTION NAME> –query possibleOutboundIpAddresses –output tsv > outbound.txt This will output a list of about 40 possible IP Addresses and save it into a text file called outbound.txt. As long as we add all of these to the GridDB Cloud Whitelist, we will be able to successfully save all of our events as they occur. Whitelist IP Addresses Obviously, while hand-writing each of these IP addresses is technically feasible, it’d be tedious and horrifying. To add each of these with a simple do while loop in bash, we first need to grab the endpoint with all authorization headers. To do so, open up your GridDB Cloud Web Portal, open up the dev console, navigate to the network tab, filter for XHR requests, clear everything out, and add one of the 40 IP addresses on your list. When you hit submit, you will see a 201 POST Request, now do the following: right click -> Copy Value -> Copy as cURL. You will now have the cURL Command saved in your clipboard. Open up the whitelistIp.sh script and enter in the endpoint unique to you inside of the runCurl script sans the IP Address at the end. 
Here is what the entire script looks like: #!/bin/bash file=$1 #EXAMPLE #runCurl() { # curl ‘https://cloud57.griddb.com/mfcloud57/dbaas/web-api/contracts/m01wc1a/access-list’ -X POST -H ‘User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:133.0) Gecko/20100101 Firefox/133.0’ -H ‘Accept: application/json, text/plain, */*’ -H ‘Accept-Language: en-US,en;q=0.5’ -H ‘Accept-Encoding: gzip, deflate, br, zstd’ -H ‘Content-Type: application/json;charset=utf-8’ -H ‘Access-Control-Allow-Origin: *’ -H ‘Authorization: Bearer eyJ0eXAiOiJBY2Nlc3MiLCJIUzI1NiJ9.eyJzdWIiOiJkMTg4NjlhZC1mYjUxLTQwMWMtOWQ0Yy03YzI3MGNkZTBmZDkiLCJleHAiOjE3MzEwMTEyMTMsInJvbGUiOiJBZG1pbiIsInN5c3RlbVR5cGUiOjF9.B1MsV9-Nu8m8mJbsp6dKABjJDBjQDdc9aRLffTlTcVM’ -H ‘Origin: https://cloud5197.griddb.com’ -H ‘Connection: keep-alive’ -H ‘Referer: https://cloud5197.griddb.com/mfcloud5197/portal/’ -H ‘Sec-Fetch-Dest: empty’ -H ‘Sec-Fetch-Mode: cors’ -H ‘Sec-Fetch-Site: same-origin’ -H ‘Priority: u=0’ -H ‘Pragma: no-cache’ -H ‘Cache-Control: no-cache’ –data-raw $1 #} runCurl() { <paste VALUE HERE> $1 } while IFS= read -r line; do for ip in ${line//,/ }; do echo “Whitelisting IP Address: $ip” runCurl $ip done done < “$file”</paste> The script will take the outputs from the Azure CLI command to make a string of IP Addresses, seperate out the value by the comma, and then run the cURL command for each individual address. This script expects the file name as a CLI argument when running ie: $ ./whitelistIp.sh outbound.txt. Sending Data from Device to Cloud Now that we’ve got everything set up, the last thing we will do is send data from our virtual device to the cloud. We then expect to see our test string being published into our GridDB Cloud instance. In the source code, we are simply saving a string which contains the characters “GRID EVENT TRIGGERED” to a new container called azureTest as a test to make sure our infrastructure works as expected. From your VS Code window, in the explorer tab, find the Azure IoT Hub Resource panel and find your IoT Hub Manager and its devices. Right click the device you want to use and select Send D2C Messages to IoT Hub (D2C = Device to Cloud). Once your message is sent, you should have a new container in your GridDB Cloud instance called ‘azureTest’ and it should have one row of data inside of it with a value of ‘GRID EVENT TRIGGERED’ — cool! Conclusion And that does it! You can now expand your Cloud IoT Infrastructure as much as you’d like. Make new data containers with real schemas tied to real devices and sync them up and save all data into GridDB Cloud for a purely server-less

GridDB Cloud Quick Start Guide

GridDB Cloud v2.0 has officially been released, has a new free tier, and is officially available worldwide. In this quick start guide, you will learn how to insert IoT data into the GridDB Cloud, how to test the viability of your connection, and the basic CRUD commands (Create, Read, Update, Delete). Contents 0 Preparation 1 Whitelisting Your IP Address 2 Adding GridDB Users and Granting DB Access 3 Checking your GridDB Connection 4 Creating your First Time Series & Collection Containers 5 CRUD with GridDB Cloud (Create, Read, Update, Delete) 6 Ingesting Sample IoT Data 7 Running Data Analysis Preparation Before we jump into how to use the GridDB Cloud, let's go over some basic preparation. Sign Up for GridDB Cloud Free Plan If you would like to sign up for a GridDB Cloud Free instance, you can do so at the following link: https://form.ict-toshiba.jp/download_form_griddb_cloud_freeplan_e. Clone GitHub Repository To follow along and run these sample code snippets, please clone the repository here: $ git clone https://github.com/griddbnet/Blogs.git --branch cloud-quick-start-worldwide Source Code Overview We have prepared some basic HTTP Requests that can help jump-start your journey with GridDB Cloud. These requests are shared via three different programming interfaces: CLI scripts (aka bash, in the bash/ dir), node.js, and python. For example, the first command will be to make sure the connection between your machine and the cloud can be made; to do so, we will run an HTTP Request to a specific endpoint using just bash (with cURL), and then with node.js/python scripts. Set Up For Running Sample Code To ensure your HTTP Requests have the proper connection details, copy the env.example file (from the GitHub Repo shared in the section above) and rename it to .env. You must fill in your personal variables (i.e. your GridDB Cloud Web API endpoint as well as your user/pass combo encoded into base64). To get proper creds, you will need to encode the username/password combination, separated by a colon, into base64; for example: admin:admin becomes YWRtaW46YWRtaW4=. You can encode those values by using the following website: https://www.base64encode.org/ The Web API URL can be found on the main page of your GridDB Dashboard. Here is an example of a .env file. export GRIDDB_WEBAPI_URL="https://cloud51e32re97.griddb.com:443/griddb/v2/gs_clustermfcloud5314927/dbs/ZV8YUerterlQ8" export USER_PASS="TTAxZ2FYMFrewwerZrRy1pc3JrerehZWw6avdfcvxXNyYWVs" Now run $ source .env to load these values into your environment so that you can run these scripts from your CLI. First Steps with GridDB Cloud Your GridDB Cloud instance can be communicated with via HTTP Requests; every action needed to interact with GridDB will require formulating and issuing an HTTP Request with different URLs, parameters, methods, and payload bodies. 1. Whitelisting Your IP Address If you haven't already, please whitelist your public IP address in the network settings of your GridDB Cloud Management dashboard. You can find your own IP address by using a simple Google Search: "What is my IP Address?" Or you can go here: https://whatismyipaddress.com/ Go to the network tab and add in your IP address. Note: CIDR ranges are compatible, so please feel free to add your own if you know it. 2. Adding GridDB Users and Granting DB Access Next, we should create a new GridDB User. From the side panel, click the icon which says GridDB User. From this page, click CREATE DATABASE USER.
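Because this new user's name and password are what get encoded (as user:pass in base64) for the USER_PASS variable shown earlier, note that you can also generate that value locally rather than pasting credentials into a website. Here is a small sketch in Python; the admin:admin value is only the example from above, and any base64 tool works just as well:

import base64

# Encode "username:password" for the Basic Authorization header.
creds = "admin:admin"  # replace with your own GridDB Cloud user and password
print(base64.b64encode(creds.encode("utf-8")).decode("ascii"))  # prints YWRtaW46YWRtaW4=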
This user’s name and password will be attached to all of our HTTP Requests as a Basic Authorization Header when using the Web API. Once you create the new user, you will also need to grant access to your database. Click on the user from the table of users in GridDB Users page and from this page, grant access to your database (either READ or ALL). Now we can move on to making actual HTTP requests. 3. Checking your GridDB Connection Let’s start with a sanity check and make sure that we can reach out to the GridDB Cloud instance. With your .env file made and ready to go, you can make sure you get the variables loaded into your environment using the following command: $ source .env. And now run the bash script: If you see a status code of 200, that’s good! That means your connection was succesful. If you are met with a status code of 403 Forbidden, that likely means you have not whitelisted your machine’s IP Address in the network tab. Likewise, a returned code of 401 Unauthorized likely means your credentials are incorrect — please make sure you use user:pass encoded into base64! Check Connection URL Endpoint The Web API uses a base url which we will use and expand upon to build out our requests. The base url looks like this: https://cloud<number>.griddb.com/griddb/v2/<clusterName>/dbs/<database name> To check that our connection exists, we can append the following to our base url /checkConnection. Because we are not sending any data back to the server, we will use the GET HTTP method. Lastly, we need to include basic authentication in our HTTP Request’s headers. For this, we will need to include our username and password encoded into base64. With all that said, here is the final result https://cloud51ergege97.griddb.com/griddb/v2/gs_clustermfcloud51fgerge97/dbs/B2vderewDSJy/checkConnection We can now use this URL with any number of programming languages to communicate with our database. And as a note, the examples in this article will contain the URLs and credentials hardcoded into the example, but the source code (linked above) uses environment variables. cURL Request To check our connection with cURL, you can use the following command (check_connection.sh) curl -i –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/checkConnection’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ Because it’s a GET request, it’s rather simple and we only needed to add in the authorization header. You should be able to run this and get an HTTP Response of 200. Python Request Here is that same request written in Python # check_connection.py import requests url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/checkConnection” payload = {} headers = { ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“GET”, url, headers=headers, data=payload) print(response.status_code) node.js Request //checkConnection.js const request = require(‘request’); const options = { ‘method’: ‘GET’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/checkConnection’, ‘headers’: { ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ } }; request(options, function (error, response) { if (error) throw new Error(error); console.log(“Response Status Code: “, response.statusCode); }); 4. 
Creating your First Time Series & Collection Containers With our connection firmly established, we can create our first containers — both Collection and Time Series — of which are similar to relational tables. You can read more about that here: GridDB Data Model. The URL suffix looks like this: /containers. This request can sometimes require a multitude of data and can have a big range, therefore this request will require an HTTP method of POST. The body of the request requires container name, container type, whether a rowkey exists (bool), and the schema. Let’s first take a look at the structure outside of the context of an HTTP Request and then we will send it inside of a Request body. We will also need to include in our Request’s headers that we are sending a data payload of type JSON like so: ‘Content-Type: application/json’ Time Series Container First, let’s create a Time Series container — we can see here that we select the container type as TIME_SERIES and the first column is of type timestamp. There is also a rowkey section, but this is optional as in a time series container, the rowkey is always the timestamp by default. { “container_name”: “device1”, “container_type”: “TIME_SERIES”, “rowkey”: true, “columns”: [ { “name”: “ts”, “type”: “TIMESTAMP” }, { “name”: “co”, “type”: “DOUBLE” }, { “name”: “humidity”, “type”: “DOUBLE” }, { “name”: “light”, “type”: “BOOL” }, { “name”: “lpg”, “type”: “DOUBLE” }, { “name”: “motion”, “type”: “BOOL” }, { “name”: “smoke”, “type”: “DOUBLE” }, { “name”: “temp”, “type”: “DOUBLE” } ] } Now we simply attach this to the body when we make our Request and we should create our new container. If successful, you should get a status code of 201 (Created). cURL #create_container.sh curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘{ “container_name”: “device1”, “container_type”: “TIME_SERIES”, “rowkey”: true, “columns”: [ { “name”: “ts”, “type”: “TIMESTAMP” }, { “name”: “co”, “type”: “DOUBLE” }, { “name”: “humidity”, “type”: “DOUBLE” }, { “name”: “light”, “type”: “BOOL” }, { “name”: “lpg”, “type”: “DOUBLE” }, { “name”: “motion”, “type”: “BOOL” }, { “name”: “smoke”, “type”: “DOUBLE” }, { “name”: “temp”, “type”: “DOUBLE” } ] }’ Python #create_container.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers” payload = json.dumps({ “container_name”: “device1”, “container_type”: “TIME_SERIES”, “rowkey”: True, “columns”: [ { “name”: “ts”, “type”: “TIMESTAMP” }, { “name”: “co”, “type”: “DOUBLE” }, { “name”: “humidity”, “type”: “DOUBLE” }, { “name”: “light”, “type”: “BOOL” }, { “name”: “lpg”, “type”: “DOUBLE” }, { “name”: “motion”, “type”: “BOOL” }, { “name”: “smoke”, “type”: “DOUBLE” }, { “name”: “temp”, “type”: “DOUBLE” } ] }) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.status_code) node.js //createContainer.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: 
JSON.stringify({ “container_name”: “device1”, “container_type”: “TIME_SERIES”, “rowkey”: true, “columns”: [ { “name”: “ts”, “type”: “TIMESTAMP” }, { “name”: “co”, “type”: “DOUBLE” }, { “name”: “humidity”, “type”: “DOUBLE” }, { “name”: “light”, “type”: “BOOL” }, { “name”: “lpg”, “type”: “DOUBLE” }, { “name”: “motion”, “type”: “BOOL” }, { “name”: “smoke”, “type”: “DOUBLE” }, { “name”: “temp”, “type”: “DOUBLE” } ] }) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(“Response Status Code: “, response.statusCode); }); Collection Container Now let’s create a collection container. These containers don’t require a time series column (but they are allowed) and also don’t require rowkey to be set to true. Here are some examples: cURL #create_collection.sh curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘{ “container_name”: “deviceMaster”, “container_type”: “COLLECTION”, “rowkey”: true, “columns”: [ { “name”: “equipment”, “type”: “STRING” }, { “name”: “equipmentID”, “type”: “STRING” }, { “name”: “location”, “type”: “STRING” }, { “name”: “serialNumber”, “type”: “STRING” }, { “name”: “lastInspection”, “type”: “TIMESTAMP” }, { “name”: “information”, “type”: “STRING” } ] }’ Python #create_collection.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers” payload = json.dumps({ “container_name”: “deviceMaster”, “container_type”: “COLLECTION”, “rowkey”: True, “columns”: [ { “name”: “equipment”, “type”: “STRING” }, { “name”: “equipmentID”, “type”: “STRING” }, { “name”: “location”, “type”: “STRING” }, { “name”: “serialNumber”, “type”: “STRING” }, { “name”: “lastInspection”, “type”: “TIMESTAMP” }, { “name”: “information”, “type”: “STRING” } ] }) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.status_code) node.js //createCollection.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify({ “container_name”: “deviceMaster”, “container_type”: “COLLECTION”, “rowkey”: true, “columns”: [ { “name”: “equipment”, “type”: “STRING” }, { “name”: “equipmentID”, “type”: “STRING” }, { “name”: “location”, “type”: “STRING” }, { “name”: “serialNumber”, “type”: “STRING” }, { “name”: “lastInspection”, “type”: “TIMESTAMP” }, { “name”: “information”, “type”: “STRING” } ] }) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.statusCode); }); 5. CRUD with GridDB Cloud (Create, Read, Update, Delete) 5. CRUD with GridDB Cloud (Create, Read, Update, Delete) Next, let’s go over the commands to Create, Read, Update, and Delete. Adding Rows of Data (Create) We have already created some containers before, but to add to that, we will be creating rows of data to add to our container. We can add rows of data directly inside of our containers. 
The URL suffix: /containers/:container/rows To PUT a row of data into our container, we will need to use the HTTP Method PUT. Similar to before, we will need to specify that our content is JSON and we will include the row data in our Request body. You can add multiple rows at once, you just need to make sure that your payload is formed to accomdate extra rows and that you don’t have a trailing comma on the last row. Let’s add rows to our device1 container. [ [“2024-01-09T10:00:01.234Z”, 0.003551, 50.0, false, 0.00754352, false, 0.0232432, 21.6], [“2024-01-09T11:00:01.234Z”, 0.303551, 60.0, false, 0.00754352, true, 0.1232432, 25.3], [“2024-01-09T12:00:01.234Z”, 0.603411, 70.0, true, 0.00754352, true, 0.4232432, 41.5] ] You of course also need to be sure that your row’s schema matches your container’s. If it doesn’t, you will be met with an error message and a status code of 400 (Bad Request). cURL https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers #add_rows.sh curl –location –request PUT ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ [“2024-01-09T10:00:01.234Z”, 0.003551, 50.0, false, 0.00754352, false, 0.0232432, 21.6], [“2024-01-09T11:00:01.234Z”, 0.303551, 60.0, false, 0.00754352, true, 0.1232432, 25.3], [“2024-01-09T12:00:01.234Z”, 0.603411, 70.0, true, 0.00754352, true, 0.4232432, 41.5] ]’ Python #add_rows.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows” payload = json.dumps([ [ “2024-01-09T10:00:01.234Z”, 0.003551, 50, False, 0.00754352, False, 0.0232432, 21.6 ], [ “2024-01-09T11:00:01.234Z”, 0.303551, 60, False, 0.00754352, True, 0.1232432, 25.3 ], [ “2024-01-09T12:00:01.234Z”, 0.603411, 70, True, 0.00754352, True, 0.4232432, 41.5 ] ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“PUT”, url, headers=headers, data=payload) print(response.text) node.js //addRows.js var request = require(‘request’); var options = { ‘method’: ‘PUT’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ [ “2024-01-09T10:00:01.234Z”, 0.003551, 50, false, 0.00754352, false, 0.0232432, 21.6 ], [ “2024-01-09T11:00:01.234Z”, 0.303551, 60, false, 0.00754352, true, 0.1232432, 25.3 ], [ “2024-01-09T12:00:01.234Z”, 0.603411, 70, true, 0.00754352, true, 0.4232432, 41.5 ] ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); Querying Container (Read) After writing to our containers, we will want to read from our containers. The URL suffix is exactly the same as before: /:cluster/dbs/:database/containers/:container/rows except now we will be using the POST method request. The data expected by the server in these requests are how we expect our row data returned to us — for example, we can choose a row limit, an offset, any conditions, and a sort method. 
Here is what that body looks like: { “offset” : 0, “limit” : 100, “condition” : “temp >= 30”, “sort” : “temp desc” } The one caveat with making this Request is that because it is a POST request, you will need to send something in the body of the request. Any of the parameters above will do, but including the limit is likely the easiest option to include and has the added benefit of reducing server strain. If successful, you should get a server response with a status code of 200 (OK) and a body with the data requested. #query_container.sh { “columns”: [ { “name”: “ts”, “type”: “TIMESTAMP”, “timePrecision”: “MILLISECOND” }, { “name”: “co”, “type”: “DOUBLE” }, { “name”: “humidity”, “type”: “DOUBLE” }, { “name”: “light”, “type”: “BOOL” }, { “name”: “lpg”, “type”: “DOUBLE” }, { “name”: “motion”, “type”: “BOOL” }, { “name”: “smoke”, “type”: “DOUBLE” }, { “name”: “temp”, “type”: “DOUBLE” } ], “rows”: [ [ “2024-01-09T12:00:01.234Z”, 0.603411, 70.0, true, 0.00754352, true, 0.4232432, 41.5 ] ], “offset”: 0, “limit”: 100, “total”: 1 } cURL curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘{ “offset” : 0, “limit” : 100, “condition” : “temp >= 30”, “sort” : “temp desc” }’ Python #query_container.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows” payload = json.dumps({ “offset”: 0, “limit”: 100, “condition”: “temp >= 30”, “sort”: “temp desc” }) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.text) nodejs //queryContainer.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify({ “offset”: 0, “limit”: 100, “condition”: “temp >= 30”, “sort”: “temp desc” }) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); Updating a Row (Update) To cover updates, adding rows of data can be considered updating, but we can also directly update a row (if the container has rowkeys). The way it works is if you push a row of data to your container which has rowkey set as true, and send up a row of data with a rowkey that already exists in your container, it will update the row with whatever new information is pushed along. Let’s push data to our deviceMaster collection container once to add data, and then again to update the row. 
Let’s craft our row of data [ [“device1”, “01”, “CA”, “23412”, “2023-12-15T10:45:00.032Z”, “working”] ] Now let’s form our HTTP Requests cURL #update_collection.sh curl -i –location –request PUT ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ [“device1”, “01”, “CA”, “23412”, “2023-12-15T10:45:00.032Z”, “working”] ]’ Python #update_collection.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows” payload = json.dumps([ [ “device1”, “01”, “CA”, “23412”, “2023-12-15T10:45:00.032Z”, “working” ] ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“PUT”, url, headers=headers, data=payload) print(response.text) node.js //updateCollection.js var request = require(‘request’); var options = { ‘method’: ‘PUT’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ [ “device1”, “01”, “CA”, “23412”, “2023-12-15T10:45:00.032Z”, “working” ] ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); Updating a Row And now with that data in there, if you change any of the values outside of the first one (the rowkey, the device name), it will update that device’s metadata will keeping the row inside of your container. curl -i –location –request PUT ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ [“device1”, “01”, “NY”, “23412”, “2023-12-20T10:45:00.032Z”, “working”] ]’ Here we are changing the location and the time of last inspection. If you look at your dashboard, the values will be have been updated. Deleting a Row (Delete) We can delete a row simply by using the appropriate HTTP Method (Delete) and then sending in a valid rowkey and container to our server. Let’s delete our deviceMaster’s lone row. The body of the request looks like this. You can add multiple rowkeys inside here to delete multiple rows at once. 
[ “device1” ] cURL #delete_row.sh curl -v –location –request DELETE ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ “device1” ]’ Python #delete_row.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows” payload = json.dumps([ “device1” ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“DELETE”, url, headers=headers, data=payload) print(response.status_code) node.js //deleteRow.js var request = require(‘request’); var options = { ‘method’: ‘DELETE’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/deviceMaster/rows’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ “device1” ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.statusCode); }); Deleting a Row from Time Series Container You can also delete the row of a time series container. As stated before, the time stamp will always be the rowkey in a time series container, so here we just add our time stamp and those rows will be deleted. #delete_container.sh curl –location –request DELETE ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers/device1/rows’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ “2024-01-09T10:00:01.234Z”, “2024-01-09T12:00:01.234Z” ]’ TQL Next, let’s try running a TQL Query. If you’re unfamiliar, TQL is GridDB’s special query language. Let’s run a simple query. First, this is what the URL looks like base url + /tql The body of the request will need the container name followed by your query statement. 
We can also query multiple containers at once: [ {“name” : “deviceMaster”, “stmt” : “select * limit 100”, “columns” : null}, {“name” : “device1”, “stmt” : “select * where temp>=24”, “columns” : [“temp”, “co”]}, ] And this is the response of the above query: [ { “columns”: [ { “name”: “equipment”, “type”: “STRING” }, { “name”: “equipmentID”, “type”: “STRING” }, { “name”: “location”, “type”: “STRING” }, { “name”: “serialNumber”, “type”: “STRING” }, { “name”: “lastInspection”, “type”: “TIMESTAMP”, “timePrecision”: “MILLISECOND” }, { “name”: “information”, “type”: “STRING” } ], “results”: [ [ “device1”, “01”, “NY”, “23412”, “2023-12-20T10:45:00.032Z”, “working” ] ], “offset”: 0, “limit”: 100, “total”: 1, “responseSizeByte”: 31 }, { “columns”: [ { “name”: “temp”, “type”: “DOUBLE” }, { “name”: “co”, “type”: “DOUBLE” } ], “results”: [ [ 25.3, 0.303551 ], [ 41.5, 0.603411 ] ], “offset”: 0, “limit”: 1000000, “total”: 2, “responseSizeByte”: 32 } ] cURL #tql.sh curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/tql’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘ [ {“name” : “deviceMaster”, “stmt” : “select * limit 100”, “columns” : null}, {“name” : “device1”, “stmt” : “select * where temp>=24”, “columns” : [“temp”, “co”]} ] ‘ Python #tql.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/tql” payload = json.dumps([ { “name”: “deviceMaster”, “stmt”: “select * limit 100”, “columns”: None }, { “name”: “device1”, “stmt”: “select * where temp>=24”, “columns”: [ “temp”, “co” ] } ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.text) node.js //tql.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/tql’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ { “name”: “deviceMaster”, “stmt”: “select * limit 100”, “columns”: null }, { “name”: “device1”, “stmt”: “select * where temp>=24”, “columns”: [ “temp”, “co” ] } ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); SQL On top of TQL, GridDB and the GridDB Cloud also have SQL functionality. Though the SQL functionality for the GridDB Cloud is limited to reading results (SELECT) and updating some rows (UPDATE). SQL SELECT base url + /sql [ {“type” : “sql-select”, “stmt” : “SELECT * FROM deviceMaster”}, {“type” : “sql-select”, “stmt” : “SELECT temp, co FROM device1 WHERE temp>=24″} ] It is very similar to TQL but we call on the container name from within the query itself, just like “normal” SQL statements. 
Here is the response body: [ { “columns”: [ { “name”: “equipment”, “type”: “STRING” }, { “name”: “equipmentID”, “type”: “STRING” }, { “name”: “location”, “type”: “STRING” }, { “name”: “serialNumber”, “type”: “STRING” }, { “name”: “lastInspection”, “type”: “TIMESTAMP”, “timePrecision”: “MILLISECOND” }, { “name”: “information”, “type”: “STRING” } ], “results”: [ [ “device1”, “01”, “CA”, “23412”, “2023-12-15T10:45:00.032Z”, “working” ] ], “responseSizeByte”: 47 }, { “columns”: [ { “name”: “temp”, “type”: “DOUBLE” }, { “name”: “co”, “type”: “DOUBLE” } ], “results”: [ [ 25.3, 0.303551 ], [ 41.5, 0.603411 ] ], “responseSizeByte”: 32 } ] cURL #sql_select.sh curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ {“type” : “sql-select”, “stmt” : “SELECT * FROM deviceMaster”}, {“type” : “sql-select”, “stmt” : “SELECT temp, co FROM device1 WHERE temp>=24”} ] ‘ Python #sql_select.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql” payload = json.dumps([ { “type”: “sql-select”, “stmt”: “SELECT * FROM deviceMaster” }, { “type”: “sql-select”, “stmt”: “SELECT temp, co FROM device1 WHERE temp>=24” } ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.text) node.js //sqlSelect.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ { “type”: “sql-select”, “stmt”: “SELECT * FROM deviceMaster” }, { “type”: “sql-select”, “stmt”: “SELECT temp, co FROM device1 WHERE temp>=24” } ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); SQL SELECT GROUP BY RANGE Because we are using SQL Select, you can use the GridDB’s Group By Range as well. You can learn more about that here: Exploring GridDB’s Group By Range Functionality. 
We will make our query and group by hours: [ {“type” : “sql-select”, “stmt” : “SELECT temp, co FROM device1 WHERE ts > TO_TIMESTAMP_MS(1594515625984) AND ts < TO_TIMESTAMP_MS(1595040779336) GROUP BY RANGE (ts) EVERY (1, HOUR)”} ] cURL #sql_select_groupby.sh curl -i –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ {“type” : “sql-select”, “stmt” : “SELECT temp, co FROM device1 WHERE ts > TO_TIMESTAMP_MS(1594515625984) AND ts < TO_TIMESTAMP_MS(1595040779336) GROUP BY RANGE (ts) EVERY (1, HOUR)”} ] ‘ Python # sql_select_groupby.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql” payload = json.dumps([ { “type”: “sql-select”, “stmt”: “SELECT temp, co FROM device1 WHERE ts > TO_TIMESTAMP_MS(1594515625984) AND ts < TO_TIMESTAMP_MS(1595040779336) GROUP BY RANGE (ts) EVERY (1, HOUR)” } ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0′ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.text) SQL Insert The base URL is the same as SELECT, but you need to append ‘update’ base url + /:cluster/dbs/:database/sql/update [ {“stmt” : “insert into deviceMaster(equipment, equipmentID, location, serialNumber, lastInspection, information) values(‘device2′, ’02’, ‘MA’, ‘34412’, TIMESTAMP(‘2023-12-21T10:45:00.032Z’), ‘working’)”} ] cURL #sql_insert.sh curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql/update’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ {“stmt” : “insert into deviceMaster(equipment, equipmentID, location, serialNumber, lastInspection, information) values(‘\”device2’\”, ‘\”02’\”, ‘\”MA’\”, ‘\”34412’\”, TIMESTAMP(‘\”2023-12-21T10:45:00.032Z’\”), ‘\”working’\”)”} ]’ Python #sql_insert.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql/update” payload = json.dumps([ { “stmt”: “insert into deviceMaster(equipment, equipmentID, location, serialNumber, lastInspection, information) values(‘device2′, ’02’, ‘MA’, ‘34412’, TIMESTAMP(‘2023-12-21T10:45:00.032Z’), ‘working’)” } ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.text) node.js //sqlInsert.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql/update’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ { “stmt”: “insert into deviceMaster(equipment, equipmentID, location, serialNumber, lastInspection, information) values(‘device2′, ’02’, ‘MA’, ‘34412’, TIMESTAMP(‘2023-12-21T10:45:00.032Z’), ‘working’)” } ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); SQL Update The base URL is the same as above, but you need to append ‘update’ base url + /sql/update [ {“stmt” : “update deviceMaster set 
location = ‘LA’ where equipmentID = ’01′”} ] This command allows you to Update, similar to the NoSQL method described above. We can both update existing rows, or update containers to add new rows. cURL #sql_update.sh curl -i -X POST –location ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql/update’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ {“stmt” : “update deviceMaster set location = ‘\”LA’\” where equipmentID = ‘\”01’\””} ]’ Python #sql_update.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql/update” payload = json.dumps([ { “stmt”: “update deviceMaster set location = ‘LA’ where equipmentID = ’01′” } ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“POST”, url, headers=headers, data=payload) print(response.text) node.js //sqlUpdate.js var request = require(‘request’); var options = { ‘method’: ‘POST’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/sql/update’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ { “stmt”: “update deviceMaster set location = ‘LA’ where equipmentID = ’01′” } ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.body); }); Dropping Containers We can also drop containers. base url + /containers The request’s body can contain one or multiple container names of which will be dropped once we make our request. [ “deviceMaster” ] If successful, you will receive a status code of 204 (No Content) cURL #delete_container.sh curl -i –location –request DELETE ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers’ \ –header ‘Content-Type: application/json’ \ –header ‘Authorization: Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ \ –data ‘[ “deviceMaster” ]’ Python #delete_container.py import requests import json url = “https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers” payload = json.dumps([ “deviceMaster” ]) headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, ‘User-Agent’: ‘PostmanRuntime/7.29.0’ } response = requests.request(“DELETE”, url, headers=headers, data=payload) print(response.status_code) node.js //deleteContainer.js var request = require(‘request’); var options = { ‘method’: ‘DELETE’, ‘url’: ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/containers’, ‘headers’: { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’ }, body: JSON.stringify([ “deviceMaster” ]) }; request(options, function (error, response) { if (error) throw new Error(error); console.log(response.statusCode); }); 6. Ingesting Sample IoT Data Next, let’s take a look at ingesting CSV data. We will ingest IoT data from Kaggle. You can download the raw file from their website here: https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k. Here is the python script you can use to ingest the data into our device2 container. 
We have already created our device1 container, but we will use the same schema to create a new container called device2 #data_ingest.py import pandas as pd import numpy as np import json import requests from datetime import datetime as dt, timezone headers = { ‘Content-Type’: ‘application/json’, ‘Authorization’: ‘Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs’, “User-Agent”:”PostmanRuntime/7.29.0″ } base_url = ‘https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/’ data_obj = { “container_name”: “device1”, “container_type”: “TIME_SERIES”, “rowkey”: True, “columns”: [] } input_variables = [ “ts”,”co”,”humidity”,”light”,”lpg”,”motion”,”smoke”,”temp” ] data_types = [ “TIMESTAMP”, “DOUBLE”, “DOUBLE”, “BOOL”, “DOUBLE”, “BOOL”, “DOUBLE”,”DOUBLE” ] for variable, data_type in zip(input_variables, data_types): column = { “name”: variable, “type”: data_type } data_obj[“columns”].append(column) #Create Container url = base_url + ‘containers’ r = requests.post(url, json = data_obj, headers = headers) All of this is pretty straightforward. Next, let’s actually ingest the data. To do so, we will use the pandas python library to read the csv file which we will include inside of our body’s request to be ingested. The pandas library also allows us to easily target and transform specific columns in our csv data to make it possible to be ingested. And one last note, because the csv file is over eight megabytes, we will break up our data into chunks and send them to our Cloud that way. Download data from here: https://www.kaggle.com/datasets/garystafford/environmental-sensor-data-132k Here is the rest of the code: iot_data = pd.read_csv(‘iot_telemetry_data.csv’) #2023-12-15T10:25:00.253Z iot_data[‘ts’] = pd.to_datetime(iot_data[‘ts’], unit=’s’).dt.strftime(“%Y-%m-%dT%I:%M:%S.%fZ”) print(iot_data[“ts”]) iot_data = iot_data.drop(‘device’, axis=1) #print(iot_data.dtypes) iot_subsets = np.array_split(iot_data, 20) #Ingest Data url = base_url + ‘containers/device1/rows’ for subset in iot_subsets: #Convert the data in the dataframe to the JSON format iot_subsets_json = subset.to_json(orient=’values’) request_body_subset = iot_subsets_json r = requests.put(url, data=request_body_subset, headers=headers) print(‘~~~~~~~~~~~~~~~~~~~~~~~~~~~~~’) print(‘_______________’,r.text,’___________’) print(‘~~~~~~~~~~~~~~~~~~~~~~~~~~~~~’) if r.status_code > 299: print(r.status_code) break else: print(‘Success for chunk’) As you run the script, it should be printing our successful messages for each chunk uploaded, complete with how many rows got successfully updated. Once done, you can check out your device1 with an HTTP Request query or through the portal. 7. Running Data Analysis Lastly, let’s do some simple python analysis. We will query our iot data and then use that data to do some simple analysis and charting of our data. 
#data_analysis.py import pandas as pd import numpy as np import requests import plotly.express as px from IPython.display import Image # ts object # co float64 # humidity float64 # light bool # lpg float64 # motion bool # smoke float64 # temp float64 headers = { 'Content-Type': 'application/json', 'Authorization': 'Basic TTAxMU1sd0MxYS1pc3ewrwqJhZWw6aXNyYWVs', 'User-Agent': 'PostmanRuntime/7.29.0' } base_url = 'https://cloud5197422.griddb.com/griddb/v2/gs_clustermfcloud5197422/dbs/B2vdfewfwDSJy/' sql_query1 = (f"""SELECT * from device1 WHERE co < 0.0019050147565559603 """) url = base_url + 'sql' request_body = '[{"type":"sql-select", "stmt":"'+sql_query1+'"}]' data_req1 = requests.post(url, data=request_body, headers=headers) myJson = data_req1.json() # Build a DataFrame using the column names returned by the Web API dataset = pd.DataFrame(myJson[0]["results"], columns=[col["name"] for col in myJson[0]["columns"]]) print(dataset) lowest_col = dataset.sort_values('co', ascending=False).head(20000) scatter_plot = px.scatter(lowest_col, x='ts', y='co', size='co', color='co', color_continuous_scale='plasma', hover_name='co') # Customize the plot (time is on the x axis, CO readings on the y axis) scatter_plot.update_layout( title='Data Analysis', xaxis_title='Time', yaxis_title='CO Level' ) scatter_plot.update_layout(template='plotly_dark') # Show the plot scatter_plot.show() Conclusion And with that, we have shown you how to interact with your GridDB Cloud and do all sorts of database-related operations, from creating containers and running CRUD commands to ingesting CSV data and charting a simple analysis.

Creating an LLM to Generate SQL Queries for GridDB

Introduction GridDB is a time series database available both on-premise and on the cloud optimized for IoT and Big Data that was developed by Toshiba Digital Solutions Corporation. It features a unique key-container model designed specifically to handle both metadata and time series data. Its in-memory architecture allows incredible ingestion and query performance. GridDB is also horizontally scalable to improve both performance and reliability. With the rise in popularity of using Large Language Models (LLMs), general purpose models like GPT-4 have attracted the most interest and media coverage, but many other specialized models exist for the purpose of code generation. While many of these models are extremely large and require immense compute power to train and execute, some LLMs can be trained and executed on platforms that are practical and cost effective. Every database uses a slightly different form of SQL and GridDB is no different. In particular, GridDB has different time functions compared to other SQL databases and also uses a unique Key-Container data model where it is encouraged to store homogeneous data in multiple tables. With these differences in mind, the LLM must be fine-tuned for GridDB and experience tells us that an off the shelf LLM would not produce suitable queries. There are both consumer and business use cases for using an LLM to query GridDB. For consumers, the LLM would enable the end user to ask simple questions about their own data such as “When do I consume the most electricity?”. For business analysts and managers, it extends their business intelligence tools allowing for more ad-hoc queries to dive deeper into their organization’s data. In this technical report, we demonstrate how software developers can utilize an LLM to generate queries for use with GridDB’s SQL interface to enhance their application. The process used to create and use an LLM for GridDB is as follows: Determine SQL Generation performance of other models and the feasibility of fine tuning these other models. Find the datasets that were used to train the models selected in Step 1. Filter out any queries that are not supported by GridDB and fine-tune the model to ensure accuracy is still reasonable. Create a data set that uses GridDB specific features: time series range selection and Key-Container. Fine-tune the model with our new GridDB specific dataset and evaluate the accuracy as measured by the percentage of the number of responses that matched the human answer in the evaluation data split. Demonstrate inference within a Python Flask application. Source Code This project’s code can found on the griddbnet GitHub page: https://github.com/griddbnet/sql_llm_model Existing Model Evaluation Closed source, general purpose models such as GPT-4 were immediately dismissed as the bulk of SQL related material they consumed for training would have been for mainstream databases such as Oracle, Postgres, and SQLite. Not only that, but their closed source nature would make it difficult to train or fine-tune the models for GridDB. DeFog.AI’s SQLCoder model based on LLAMA was tested and performed well, but the original models did not support GridDB’s time series SQL semantics. Furthermore, the hardware and time requirements to fine-tune or run inference with SQLCoder was not feasible. Likewise, StarCoder was examined and its reported accuracy was deemed to be significantly poorer than SQLCoder while being just as difficult to fine-tune. 
The last two models left for consideration were OpenAI's GPT-2 and Google's T5-Small models. After fine-tuning both with our selected datasets, T5-Small was generally more accurate and trouble-free during fine-tuning. Other parties had already used GPT-2 and T5-Small to create SQL-generating LLMs, and between the two, accuracy was better with T5-Small. Dataset Selection Three individual datasets with similar characteristics that had been used to train text-to-SQL models were found: https://huggingface.co/datasets/b-mc2/sql-create-context https://huggingface.co/datasets/Clinton/Text-to-sql-v1 https://huggingface.co/datasets/knowrohit07/know_sql Between the three datasets, they have nearly 400,000 rows of data. Each row contains: the context, or the SQL schema; the question to be converted into a SQL query; and the answer, the SQL query based on the question and context. For example: { "answer" : "SELECT COUNT(*) FROM head WHERE age > 56", "question" : "How many heads of the departments are older than 56 ?", "context" : "CREATE TABLE head (age INTEGER)" } Dataset Filtering The first problem seen in these third-party datasets is that some of the queries do not work with GridDB. A simple script was created to execute the context statement and then the query statement for each row in the dataset; if the query executed successfully, the row was saved to the filtered dataset (in the snippet below, fd is the opened dataset file and curs is a database cursor connected to GridDB). for line in fd.readlines(): data = json.loads(line) data[answer_name] = re.sub('"', '\\"', data[answer_name]) try: for stmt in data[context_name].split(";"): stmt = stmt.strip() table = stmt.split(" ")[2] curs.execute("DROP TABLE IF EXISTS "+table) curs.execute(stmt) except: pass try: curs.execute(data[answer_name]) good=good+1 print(json.dumps({"question": data[question_name], "context": data[context_name], "answer": data[answer_name]})) except: bad=bad+1 Of the nearly 400,000 queries in the original datasets, 170,000 of them functioned in GridDB and were used to perform the initial model fine-tuning. The dataset was split 80/10/10 for training, validation, and testing, and fine-tuning ran on top of the base T5-small model. Dataset Creation None of the data in the filtered dataset supports GridDB-specific functionality. While GridDB has many unique SQL features not shared with any other database, we wanted to focus on the two most basic. The first is GridDB's Key-Container data model. Most relational databases store all data in one table, while GridDB recommends splitting data into multiple tables. For example, data for device #1 would be stored in tsdata_1 and data for device #20 in tsdata_20, so the human question of "What is the maximum temperature recorded on device 12?" would be the SQL query SELECT max(temp) FROM tsdata_12 instead of SELECT max(temp) FROM tsdata WHERE device = '12' as would be normal in another database. The second feature we wanted to support was fetching data, in particular aggregations, for a given period. GridDB uses the TIMESTAMP() SQL function to compare time series data in a query. For example, to select the average temperature in 2024, you would use the query SELECT avg(temp) from tsdata_12 where ts >= TIMESTAMP('2024-01-01') and ts < TIMESTAMP('2025-01-01'). To do this, a new tool was developed that could generate a human question with a corresponding SQL query answer based on a given template. Iterating on the template with random values for the time period, container identifier, aggregation, etc. builds a reasonably sized dataset to fine-tune the model with.
An example input template: { "context" : "CREATE TABLE IF NOT EXISTS devices (device_id INTEGER, ts TIMESTAMP, co DOUBLE, humidity DOUBLE,light BOOL,lpg DOUBLE,motion BOOL,smoke DOUBLE,temp DOUBLE);", "queries" : [ { "columns" : ["co", "humidity", "lpg", "smoke", "temp"], "human" : "What is the {HUMAN_AGGREGATE} {COLUMN} ?", "sql" : "SELECT {AGGREGATE}({COLUMN}) FROM devices;" } ] } would produce: {"context": "CREATE TABLE IF NOT EXISTS devices (device_id INTEGER, ts TIMESTAMP, co DOUBLE, humidity DOUBLE,light BOOL,lpg DOUBLE,motion BOOL,smoke DOUBLE,temp DOUBLE);", "question": "What is the lowest smoke in 2009 for all devices?", "answer": "SELECT MIN(smoke) FROM devices WHERE ts > TIMESTAMP('2009-01-01T00:00:00Z') and ts < TIMESTAMP('2010-01-01T00:00:00Z');"} {"context": "CREATE TABLE IF NOT EXISTS devices (device_id INTEGER, ts TIMESTAMP, co DOUBLE, humidity DOUBLE,light BOOL,lpg DOUBLE,motion BOOL,smoke DOUBLE,temp DOUBLE);", "question": "What is the highest humidity in June 2011 for all devices?", "answer": "SELECT MAX(humidity) FROM devices WHERE ts > TIMESTAMP('2011-06-01T00:00:00Z') and ts < TIMESTAMP('2011-07-01T00:00:00Z');"} We created five to six templated queries (some using the TIMESTAMP() function, some not) for five different contexts (some using multiple tables per the Key-Container model, some not), and then, with the dataset creation tool, generated 100 different question/answer pairs per query for a total of 3600 queries. As automatic dataset splitting resulted in a disproportionate amount of one context over another in the test dataset, a second test dataset was generated with only a single question/answer pair for each templated query.
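The generation tool itself is not listed in this report, but the idea is simple enough to sketch. The following is only a rough illustration of how such a template might be expanded; the helper name expand_template, the aggregate mapping, and the year handling are assumptions for the sake of the example, not the actual tool:

import json
import random

# Hypothetical mapping from a SQL aggregate to the wording used in the human question.
AGGREGATES = {"MIN": "lowest", "MAX": "highest", "AVG": "average"}

def expand_template(template, pairs_per_query=100):
    rows = []
    for query in template["queries"]:
        for _ in range(pairs_per_query):
            col = random.choice(query["columns"])
            agg, human_agg = random.choice(list(AGGREGATES.items()))
            year = random.randint(2009, 2024)
            # Fill the placeholders in the human question and the SQL answer.
            question = query["human"].replace("{HUMAN_AGGREGATE}", human_agg).replace("{COLUMN}", col)
            question = question.rstrip(" ?") + f" in {year} for all devices?"
            answer = query["sql"].replace("{AGGREGATE}", agg).replace("{COLUMN}", col).rstrip(";")
            answer += (f" WHERE ts > TIMESTAMP('{year}-01-01T00:00:00Z')"
                       f" and ts < TIMESTAMP('{year + 1}-01-01T00:00:00Z');")
            rows.append({"context": template["context"], "question": question, "answer": answer})
    return rows

# Each generated row can then be written out as one JSON line of the training set:
# for row in expand_template(template): print(json.dumps(row))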
Fine Tuning For both the filtered dataset and the generated GridDB dataset, each training data item is combined into a single string of the format: Tables: {context} Question: {question} Answer: The above string is then tokenized using the HuggingFace AutoTokenizer and used as the input IDs, while the answer is tokenized as the labels. After the dataset has been tokenized, it is trained using HuggingFace's Trainer library. Additional tokens for < and <= need to be added to the tokenizer, otherwise those symbols within the SQL statements would be ignored by the model during training. def tokenize_function(example): start_prompt = "Tables:\n" middle_prompt = "\n\nQuestion:\n" end_prompt = "\n\nAnswer:\n" data_zip = zip(example['context'], example['question']) prompt = [start_prompt + context + middle_prompt + question + end_prompt for context, question in data_zip] example['input_ids'] = tokenizer(prompt, padding='max_length', truncation=True, return_tensors="pt").input_ids example['labels'] = tokenizer(example['answer'], padding='max_length', truncation=True, return_tensors="pt").input_ids return example finetuned_model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(tok_model_name) tokenizer.add_tokens(['<=', '<= ', ' <=', ' <', '<', '< ', '>= ', ' >=', '>=']) finetuned_model.resize_token_embeddings(len(tokenizer)) tokenized_datasets = dataset.map(tokenize_function, batched=True) output_dir = f'./sql-training-{str(int(time.time()))}' training_args = TrainingArguments( output_dir=output_dir, learning_rate=5e-3, num_train_epochs=2, per_device_train_batch_size=8, # batch size per device during training per_device_eval_batch_size=8, # batch size for evaluation weight_decay=0.01, logging_steps=50, evaluation_strategy='steps', # evaluation strategy to adopt during training eval_steps=500, # number of steps between evaluation ) trainer = Trainer( model=finetuned_model, args=training_args, train_dataset=tokenized_datasets['train'], eval_dataset=tokenized_datasets['validation'], ) trainer.train() Using an AMD Ryzen Threadripper 2990WX with an NVIDIA 4070GTX, training took approximately 3-4 hours to complete for the filtered dataset and under an hour for the generated dataset. Evaluation Using either the 10% test split of the training dataset or the generated test dataset, the same tokenization method was used to build input for the model. The output answer was generated for every input and compared against the reference answer using HuggingFace's ROUGE evaluation library. This evaluation was performed for both the original filtered dataset and the generated GridDB-specific dataset, and ROUGE metrics were gathered. ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, is a set of metrics used to evaluate text transformation or summarization models by comparing a human-generated baseline answer against the model-generated response. Each ROUGE metric varies from 0 to 1, with 1 being a perfect match. The results were: ROUGE-1: 0.9220341258369449 (filtered queries) vs 0.893189189189189 (GridDB-specific queries); ROUGE-2: 0.8328271928176021 vs 0.8556992481203007; ROUGE-L: 0.9039756047111251 vs 0.8807387387387388. ROUGE-1 measures the overlap of words between the reference and inferred answer. ROUGE-2 refers to the overlap of pairs of words between the reference and inferred answer. ROUGE-L measures the longest matching sequence of words between the reference and inferred answer.
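The comparison step itself is not reproduced here, but as a rough sketch, HuggingFace's evaluate library can compute these ROUGE scores from lists of generated and reference answers; the variable and function names below are placeholders, and the model, tokenizer, and prompt format are assumed to follow the fine-tuning code above:

import evaluate

# predictions: SQL strings produced by the fine-tuned model for each test prompt
# references:  the human-written SQL answers from the test split
def rouge_scores(predictions, references):
    rouge = evaluate.load("rouge")
    return rouge.compute(predictions=predictions, references=references)

# Example with placeholder values:
# scores = rouge_scores(["SELECT MIN(smoke) FROM devices"], ["SELECT MIN(smoke) FROM devices"])
# print(scores["rouge1"], scores["rouge2"], scores["rougeL"])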
Application Implementation There are many ways to integrate the LLM into an application. LLM inference could be performed on the edge, on the user's device, which would allow for greater scalability but also much higher end-user system requirements. If the inference is performed on the server side, it can be bundled into the current application or run as a separate service that communicates with the current application. This would allow inference to run on dedicated high-performance instances, so inference would have minimal impact on the existing application's performance. We will bundle the LLM directly into our application in the demo for simplicity's sake. Adding the code to use the model in your application is straightforward. The context can be fetched using GridDB's NoSQL API (the snippet below is the body of the get_local_context() helper used in the Flask route further down): containers = [] x = 0 while x < gridstore.partition_info.partition_count: containers.extend(gridstore.partition_info.get_container_names(x, 0)) x=x+1 conts_and_schemas = {} for cont in containers: col_list = gridstore.get_container_info(cont).column_info_list schema = {} for row in col_list: schema[row[0]] = type_mapping(row[1]) conts_and_schemas[cont] = schema create_stmts = [] for key in conts_and_schemas: create_stmts.append(create_table_statement(key, conts_and_schemas[key])) return create_stmts Inference for a single question is performed in a similar fashion to how evaluation was performed. model = AutoModelForSeq2SeqLM.from_pretrained("griddb_model_2_epoch") tokenizer = AutoTokenizer.from_pretrained("t5-small") def translate_to_sql_select(context, question): prompt = f"""Tables: {context} Question: {question} Answer: """ input_ids = tokenizer.encode(prompt, return_tensors="pt") outputs = model.generate(input_ids) sql_query = tokenizer.decode(outputs[0], skip_special_tokens=True) return sql_query Finally, a Flask route gets the local context, calls the model, executes the query, and returns the response. @app.route('/nlquery') def nlquery(): question = request.args.get('question') context = get_local_context() query = translate_to_sql_select(context, question) curs = conn.cursor() try: curs.execute(query) rows = curs.fetchall() return json.dumps(rows) except: abort(400, 'Generated query was not successful') While the model is easily incorporated into any Flask or other Python application as shown, scalability may be difficult as each LLM invocation takes approximately 500 milliseconds using an AMD Ryzen Threadripper and NVIDIA 4070GTX. There are other projects, such as https://github.com/Ki6an/fastT5, that will greatly improve the scalability of the GridDB LLM model. Conclusion We hope the process of creating a training dataset, performing the training, and using the resulting LLM within an application to query your data was insightful and educational. Using an LLM, end users including IoT device owners, corporate analysts, managers, customer service, and others are able to query data stored in GridDB without having to know SQL. While the queries used to demonstrate the LLM in this project are relatively simple, the model appears to be extensible to other query types and methods. Furthermore, the T5-small model is efficient to train, not requiring large investments in hardware to train or run inference on. In the future, with a larger, more diverse training dataset and advancements in the base model, performing natural language queries will become even more commonplace and accurate. The source code used in the project is available at https://github.com/griddbnet/sql_llm_model. The finished model can be downloaded from HuggingFace.
