Getting Started with the GridDB NodeJS Client

Introduction

GridDB has released a database connector for the extremely popular node.js and, just like the Golang connector, it was built using the SWIG library. The node.js client supports node.js version 6 running on CentOS 6 and 7.
Node.js was first released in 2009, and in the nearly a decade since, it has helped JavaScript become one of the most widely used languages today. With node.js's still-surging relevance in the development world, this connector was a high priority for the GridDB team.
This blog will go through installing, setting up and testing the database connector for GridDB.

Node.js Client Setup and Installation

To build the node client on your system, you will first need to have the GridDB C client built and installed. You can follow this blog post on how to set up and test the C client if you are not familiar. There are also easy instructions found on the GitHub page itself.
Once the C Client is confirmed to be working, we will need to clone the GitHub repository for the node.js client.

$ git clone https://github.com/griddb/nodejs_client

Now that you have obtained the source code from GitHub, you can simply follow the instructions in the README on the Node.js Client's GitHub page.
The official README document from GitHub indicates that both SWIG and PCRE need to be built and confirmed working prior to building the node.js client.

$ wget https://sourceforge.net/projects/pcre/files/pcre/8.39/pcre-8.39.tar.gz
$ tar xvfz pcre-8.39.tar.gz
$ cd pcre-8.39
$ ./configure
$ make
$ make install
$ wget https://prdownloads.sourceforge.net/swig/swig-3.0.12.tar.gz
$ tar xvfz swig-3.0.12.tar.gz
$ cd swig-3.0.12
$ ./configure
$ make
$ make install
Possible Issues When Building the Node.js Client

Occasionally, this error may occur when issuing make on the Node.js client package:

$ make
error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

This is usually a path issue. If you followed the nodejs_client's GitHub instructions, you should have installed pcre and swig before attempting to make. The problem occurs when your LD_LIBRARY_PATH does not point to the directory containing the pcre shared library, which prevents SWIG from running. You can easily test if this is the case:

$ swig -version
swig: error while loading shared libraries: libpcre.so.1: cannot open shared object file: No such file or directory

If this error pops up, you simply run:

$ find / -name libpcre.so.1

and note the directory that contains the file (in this example, /usr/local/lib). Once found, add that directory to your library path:

$ export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH

Your swig -version command should now work. Try running make again.
If another issue pops up:

 g++ -fPIC -std=c++0x -g -O2 -Iinclude -Isrc -Iusr/include/node -DNODE_GYP_MODULE_NAME=griddb -DV8_DEPRECATION_WARNINGS=1 -DBUILDING_NODE_EXTENSION -c -o src/griddb_js.o src/griddb_js.cxx
src/griddb_js.cxx:171:18: fatal error: node.h: No such file or directory
 #include <node.h>
                  ^
compilation terminated.
make: *** [src/griddb_js.o] Error 1

You may need to install the node.js development tools and edit your Makefile to point to the location of the node.h file:

$ sudo yum install nodejs-devel
$ cd ~/nodejs_client
$ vi Makefile
INCLUDES_JS += -I/usr/include/node

The node.js client should now compile properly. If issues arise, please post on our forum for support.

Connecting to GridDB with JavaScript

Now that the prep work is done, we can connect to GridDB with JavaScript. Loading the client is the same process as with any other package: require it.
Here's an example of connecting to a GridDB instance:

var griddb = require('griddb_node');
var fs     = require('fs');
var factory = griddb.StoreFactory.getInstance();
var store = factory.getStore({
                        "host": process.argv[2],
                        "port": process.argv[3],
                        "cluster_name": process.argv[4],
                        "username": process.argv[5],
                        "password": process.argv[6]});

The first two lines are standard node.js boilerplate. The store variable is built from node.js command line (CL) arguments. To run a generic node file, you enter in your terminal:

$ node file1.js

So, to connect to your GridDB cluster and run the code at the same time, you supply the connection details as CL arguments:

$ node blog.js 239.0.0.1 31999 temperature admin admin
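
For reference, process.argv[0] is the node binary and process.argv[1] is the script path, so your own arguments begin at index 2. A quick sanity-check snippet (not part of the connection example) confirms the mapping:

// Run as: node blog.js 239.0.0.1 31999 temperature admin admin
console.log(process.argv.slice(2));
// => [ '239.0.0.1', '31999', 'temperature', 'admin', 'admin' ]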

Creating Schemas and Containers

Schemas for containers and their rows are created with the griddb.ContainerInfo object in conjunction with the store.putContainer method; more information can be found in the node.js API Reference.
For example, to create a new container, the column names and types first need to be set by creating a new griddb.ContainerInfo instance:

var conInfo = new griddb.ContainerInfo("col01",
                   [
                        ["name", griddb.GS_TYPE_STRING],
                        ["status", griddb.GS_TYPE_BOOL],
                        ["count", griddb.GS_TYPE_LONG],
                        ["lob", griddb.GS_TYPE_BLOB]
                   ], griddb.GS_CONTAINER_COLLECTION,
                   true)

The first parameter, a string, is the container name. The second is the list of columns, each given as a column name paired with that column's type. The third parameter sets the container type (here a collection), and the final boolean marks the first column as the row key.
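To declare a TimeSeries container instead, the same constructor is used with griddb.GS_CONTAINER_TIME_SERIES and a TIMESTAMP first column. As a sketch, the point01 container queried later in this blog could be defined along these lines (its exact schema is an assumption here):

var tsInfo = new griddb.ContainerInfo("point01",
                   [
                        ["timestamp", griddb.GS_TYPE_TIMESTAMP],
                        ["active", griddb.GS_TYPE_BOOL],
                        ["voltage", griddb.GS_TYPE_DOUBLE]
                   ], griddb.GS_CONTAINER_TIME_SERIES,
                   true);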
Rows can then be added to the container by utilizing the griddb.Container put method.
For example:

var col2;
store.putContainer(conInfo, false)
	.then(col => {
	 	col2 = col;
		col.createIndex("count", griddb.GS_INDEX_FLAG_DEFAULT);
		return col;
  	})
	.then(col => {
		col.setAutoCommit(false);
		col.put(["name01", false, 1, "ABCDEFGHIJ"]);
		return col;
	})
// The rest of the object was cut off for brevity's sake

The first parameter of the putContainer method is the container information (conInfo). The returned promise chain then creates an index and populates the container with data. col.put inserts one full row; its values line up with the schema, so "name01" goes into the STRING column "name", false into "status", and so on.
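
Reading a single row back by its row key follows the same promise style. Below is a hedged sketch; it assumes the container object exposes a get method that takes the row key and resolves to the row as an array of values in schema order, and that the "name01" row above has been committed:

col2.get("name01")
        .then(row => {
                // row is an array of column values in schema order
                console.log(row);
        })
        .catch(err => {
                console.log(err.what());
        });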

Querying and Retrieving Data

Once you have your containers populated with data and inserted into GridDB, you are ready to query and fetch your data. Similar to the Python or Java APIs, all you need to do is construct a query with the TQL statement you would like to issue against your container, then fetch the results into a RowSet object. To query data using the node.js client API, we simply call the query method on the container object. GridDB uses TQL, a SQL-like language, to conduct its queries. For example:

var col2;
store.putContainer(conInfo, false)
        .then(col => {
                col2 = col;
                col.createIndex("count", griddb.GS_INDEX_FLAG_DEFAULT);
                return col;
        })
        .then(col => {
                col.setAutoCommit(false);
                col.put(["name01", false, 1, "ABCDEFGHIJ"]);
                return col;
        })
        .then(col => {
                col.commit(); //commit calls are required to commit the current transaction and start a new one
                return col;
        })
        .then(col => {
                // container.query takes the raw TQL
                query = col.query("select *");
                // the fetch method returns the results in the form of RowSet
                return query.fetch();
        })
        .then(rs => {
                // RowSet is the result from the query.fetch
                while (rs.hasNext()) { // while the row set has another row after the current
                        console.log(rs.next());//print that row
                }
                col2.commit();
        })
        .catch(err => {
                console.log(err.what());
                for (var i = 0; i < err.getErrorStackSize(); i++) {
                        console.log("[", i, "]");
                        console.log(err.getErrorCode(i));
                        console.log(err.getMessage(i));
                }
        });

The above code snippet has comments explaining the pertinent portions. To query your GridDB containers, you use a combination of the query and fetch methods, along with the next() method of the RowSet object.
When the client runs a query, fetch returns a promise that resolves to a RowSet. With that RowSet, the typical workflow is a while loop: while rs.hasNext() is true, print rs.next().
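
The same pattern works just as well with a TQL condition instead of a bare select *. Here's a minimal sketch, assuming the col01 collection created above already holds committed data:

store.getContainer("col01")
        .then(col => {
                // raw TQL with a condition on the count column
                var query = col.query("select * where count >= 1");
                return query.fetch();
        })
        .then(rs => {
                while (rs.hasNext()) {
                        console.log(rs.next());
                }
        })
        .catch(err => {
                console.log(err.what());
        });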

Aggregation TQL

You can also use TQL to create and issue aggregation queries. The process is fairly similar to GridDB's other APIs: issue your raw TQL query with the proper aggregation operation, fetch the RowSet as before, and each call to next() then returns an AggregationResult instead of a plain row. The AggregationResult's get() method, given the expected type, returns the numeric value.
Here's an example:

var timeseries;
store.getContainer("point01")
        .then(ts => {
                timeseries = ts;
                query = ts.query("select * from point01 where not active and voltage > 50");
                return query.fetch();
        })
        .then(rowset => {
                var row;
                while (rowset.hasNext()) {
                        row = rowset.next();
                        var timestamp = Date.parse(row[0]);
                        aggCommand = "select AVG(voltage) from point01 where timestamp > TIMESTAMPADD(MINUTE, TO_TIMESTAMP_MS(" + timestamp + "), -10) AND timestamp < TIMESTAMPADD(MINUTE, TO_TIMESTAMP_MS(" + timestamp + "), 10)";
                        aggQuery = timeseries.query(aggCommand);
                        aggQuery.fetch()
                                .then(aggRs => {
                                        while (aggRs.hasNext()) {
                                                aggResult = aggRs.next();
                                                console.log("[Timestamp = " + timestamp + "] Average voltage = " + aggResult.get(griddb.GS_TYPE_DOUBLE));
                                        }
                                });
                }
        });

As shown in the example above, once the aggregation query has been fetched, the result is read with the special AggregationResult get(type) method.
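
Other aggregation operations follow the same pattern; the type passed to get() just needs to match the operation. Here's a small sketch reusing the timeseries container object from the example above, under the assumption that a COUNT result is read back as a LONG:

aggQuery = timeseries.query("select COUNT(*) from point01 where voltage > 50");
aggQuery.fetch()
        .then(aggRs => {
                while (aggRs.hasNext()) {
                        aggResult = aggRs.next();
                        console.log("Rows with voltage > 50: " + aggResult.get(griddb.GS_TYPE_LONG));
                }
        });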

Handling Timestamp Data

Because GridDB is a database with some emphasis on TimeSeries data and with aspirations of IoT superiority, we need to touch on dealing with TimeSeries data. Luckily for us, native JavaScript is already quite adept at dealing with timestamp data. No special functions required.
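
For instance, the aggregation example above already uses Date.parse on the TIMESTAMP column it reads back. Here is a small sketch of the same idea in both directions; it assumes the point01 schema of timestamp, active, and voltage used earlier, and that the client accepts a plain Date object when putting a row:

// Reading: the first column of a point01 row is its TIMESTAMP
function printReading(row) {
        var ts = new Date(row[0]);
        console.log(ts.toISOString() + " voltage = " + row[2]);
}

// Writing: a plain Date object stands in for the TIMESTAMP column
function newReading(voltage) {
        return [new Date(), false, voltage];
}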

Conclusion

Now that the blog is complete, we hope it has lent you enough of a helping hand to go out and begin developing with node.js and GridDB. Of course, if any questions arise, we wholly encourage use of the GridDB Forum.
