With the fresh release of GridDB Community Edition version 4.6, we would like to share with our readers the new features that we are most excited about. We will focus on highlighting the new SQL aggregate functions as well as the new command line interface (CLI) tool.
To take full advantage of v4.6’s new features, you will also need to install the GridDB CLI tool. Formally, it is a cluster operation control command interpreter, but we will simply refer to it as
gs_sh. The main purpose of this tool is to provide developers with a means to manage their GridDB cluster and data operations.
There have also been new SQL aggregation functions added. They won’t be covered in depth here, but you can read about them in our docs: https://docs.griddb.net/sqlreference/sql-commands-supported/#aggregate-functions
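To give a quick taste of what an aggregate query looks like from the new CLI, here is a hypothetical session against the LosAngelesNO2 time-series container used later in this post (AVG itself is a long-standing aggregate; see the linked docs for the exact list of functions that are new in 4.6):

```
gs[public]> sql select avg(notwo) from LosAngelesNO2;
```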
Installing gs_sh is extremely easy, as it comes packaged as an RPM.
$ wget https://github.com/griddb/cli/releases/download/v4.6.0/griddb-ce-cli-4.6.0-linux.x86_64.rpm
$ sudo rpm -ivh griddb-ce-cli-4.6.0-linux.x86_64.rpm
You will also need to have a
griddb-jdbc.jar set up in your
/usr/share/java/ directory. You can find info about that here: https://griddb.net/en/blog/connecting-to-griddb-via-jdbc-with-sqlworkbench-j/.
Here is a quick summary though:
$ wget https://repo1.maven.org/maven2/com/github/griddb/gridstore-jdbc/4.5.0/gridstore-jdbc-4.5.0.jar
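Once downloaded, the jar still needs to be moved into place. A minimal sketch, assuming the filename from the wget above and the /usr/share/java/ destination mentioned earlier:

```
$ sudo cp gridstore-jdbc-4.5.0.jar /usr/share/java/
```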
You can make sure you’re using the latest version of the jdbc file by checking GitHub.
That should be about it on the installation side.
Cluster Operation Control Command Interpreter (gs_sh)
As stated earlier, the tool exists to provide developers with a means to manage their GridDB cluster and data operations. Using this tool allows for the following:
- Displaying cluster status
- Database and user management
- Displaying collections and triggers
- Index setting, deleting index
- Search using TQL/SQL
Let’s just dive right in. To do so, just type directly into your shell:
$ gs_sh
gs> version
gs_sh-ce version 4.6.0
Now that you’ve confirmed it works, let us first tackle some prerequisite steps to using this feature. For this section, you will need to create a new user (one other than the default
gsadm user) and communicate your GridDB cluster definitions to the tool.
Defining GridDB Cluster
Method One: Sync
There are a couple of ways to define your nodes and cluster. If you are coming into v4.6 with a previously used GridDB instance, this first method will be easier as it uses your already defined GridDB Cluster. This method will have you simply sync over your
conf/gs_cluster.json configuration. So, drop into the shell, and after you set your username and password, you can
sync your config:
sync <IP Address> <Port Number> <Cluster var name> <node variables>
gs> setuser admin admin
gs> sync 127.0.0.1 10040 defaultCluster node0
And then you can run the
show command to make sure the details are correct:
Node variable:
  node01=Node[10.0.1.6:10040,ssh=22]
Cluster variable:
  defaultCluster=Cluster[name=defaultCluster,mode=MULTICAST,transaction=239.0.0.1:31999,sql=239.0.0.1:41999,nodes=($node01)]
Other variables:
  user=admin
  password=*****
  ospassword=
Note: these above settings are the “default” settings indicated by the guides over at https://docs.griddb.net.
Method Two: Start from Scratch
The second method is more involved, as it allows you to set all configurations prior to starting up your cluster. Instructions follow below.
First we will need to explicitly define our current cluster’s configuration. So after dropping into the gs shell, define your nodes like so:
setnode <Node variable> <IP address> <Port no.> [<SSH port no.>]
And a specific example:
gs> setnode node0 192.168.0.1 10000
gs> setnode node1 192.168.0.2 10000
Once done there, we can set the cluster configuration using any of the following connection methods: multicast, fixed list, or provider. In this example, we will use multicast.
The full command looks like this:
setcluster <Cluster variable> <Cluster name> <Multicast address> <Port no.> [<Node variable> ...]
gs> setcluster cluster0 name 239.0.0.1 1000 $node0
If you want to add/remove nodes:
gs> modcluster cluster0 add $node1
gs> modcluster cluster0 remove $node1
You will need a new user in order to connect to each GridDB node environment (explained below) as a user other than the OS admin user
gsadm. To do so:
setuser <Username> <Password>
gs> setuser admin admin
And please note, only one user can be set at a time. If you wish to change the user/password, simply rerun the above command.
You can set your own variables to keep valuable information handy, such as declaring your port number:
gs> set GS_PORT 10000
And then you can show all variables and other useful items with the show command:
Node variable:
  node0=Node[192.168.0.1:10000,ssh=22]
  node1=Node[192.168.0.2:10000,ssh=22]
Cluster variable:
  cluster0=Cluster[name=name,mode=MULTICAST,transaction=239.0.0.1:1000,nodes=($node0)]
Other variables:
  user=admin
  password=*****
  ospassword=
  GS_PORT=10000
Once you are happy with your cluster definition, you can save your settings into a script:
gs> save test.gsh
And then of course, to load:
gs> load test.gsh
And as a small note, make sure when you run the
gs_sh command, you have proper admin rights. If you get privilege errors about not being able to write to the log, the saving and loading of
.gsh files will not work.
Once you’ve got your script file saved, the next time you launch into the shell, you can simply add it as one of the options to run when beginning the process:
$ gs_sh test.gsh
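For reference, a saved script file is simply a list of gs_sh subcommands. A minimal test.gsh might look like the following — a sketch assembled from the commands used above, not a dump of an actual saved file:

```
setnode node0 192.168.0.1 10000
setnode node1 192.168.0.2 10000
setcluster cluster0 name 239.0.0.1 1000 $node0
setuser admin admin
set GS_PORT 10000
```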
Cluster Operation Controls
Here is a generalized list of what can be done with these controls:
- Displaying SQL processing under execution (showsql)
- Displaying executing event (showevent)
- Displaying connection (showconnection)
- SQL cancellation (killsql <query ID>)
For example, if you’ve got a query currently being executed, you can show it by simply typing showsql into the shell. Here is an example output:
=======================================================================
query id: e6bf24f5-d811-4b45-95cb-ecc643922149:3
start time: 2019-04-02T06:02:36.93900
elapsed time: 53
database name: public
application name: gs_admin
node: 192.168.56.101:10040
sql: INSERT INTO TAB_711_0101 SELECT a.id, b.longval FROM TAB_711_0001 a LEFT OU
  job id: e6bf24f5-d811-4b45-95cb-ecc643922149:3:5:0
  node: 192.168.56.101:10040
#---------------------------
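If that query needed to be cancelled, the query id from the output above is what you would pass to killsql. A hypothetical session, simply applying the killsql <query ID> syntax listed earlier:

```
gs> killsql e6bf24f5-d811-4b45-95cb-ecc643922149:3
```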
Data Operation in a Database
For the purposes of this blog, I am using the dataset from this other blog.
To begin, you will need to connect to the cluster which you intend to operate.
gs> connect $cluster0
Note here that the cluster name does matter, so in my specific case (following the sync instructions above), I ran:
gs> connect $defaultCluster
The connection attempt was successful(NoSQL).
The connection attempt was successful(NewSQL).
Now that we are connected, we can try running some queries.
To run a TQL Command via shell:
gs[public]> tql LosAngelesNO2 select *;
4,344 results. (0 ms)
gs[public]> get 1
timestamp,notwo
2019-01-01T00:00:00.000Z,1.4
The 1 results had been acquired.
And for SQL:
gs[public]> sql select * from LosAngelesNO2;
4,344 results. (1 ms)
gs[public]> get 10
timestamp,notwo
2019-01-01T00:00:00.000Z,1.4
2019-01-01T01:00:00.000Z,1.6
2019-01-01T02:00:00.000Z,3.5
2019-01-01T03:00:00.000Z,1.3
2019-01-01T04:00:00.000Z,1.3
2019-01-01T05:00:00.000Z,1.0
2019-01-01T06:00:00.000Z,1.9
2019-01-01T07:00:00.000Z,3.0
2019-01-01T08:00:00.000Z,3.4
2019-01-01T09:00:00.000Z,1.5
The 10 results had been acquired.
In both cases, the first command runs the search, and the subsequent get command actually fetches and displays the results.
You can also save your queries into a CSV file:
gs[public]> getcsv test.csv 1000
The 1,000 results had been acquired.
And when finished, you can just close all dangling connections:
gs[public]> tqlclose
gs[public]> queryclose
gs[public]> disconnect
Disconnected connection attempt was successful.
You can also create/delete/update your containers via the shell.
To create a collection container:
createcollection <Container name> <Column name> <Column type> [<Column name> <Column type> ...]
gs[public]> createcollection test col01 string
And a timeseries collection:
createtimeseries <Container name> <Compression method> <Column name> <Column type> [<Column name> <Column type> ...]
gs[public]> createtimeseries testTS NO colTS timestamp
And then to drop a container (here, the collection named test that we just created):

gs[public]> dropcontainer test
Another useful container feature is the showcontainer command, which displays detailed information about a container:
gs[public]> showcontainer LosAngelesNO2
Database    : public
Name        : LosAngelesNO2
Type        : TIME_SERIES
Partition ID: 24
DataAffinity: -

Compression Method : NO
Compression Window : -
Row Expiration Time: -
Row Expiration Division Count: -

Columns:
No  Name                  Type            CSTR  RowKey   Compression
------------------------------------------------------------------------------
 0  timestamp             TIMESTAMP       NN    [RowKey]
 1  notwo                 DOUBLE
To search through your containers and find them:
gs[public]> searchcontainer LosAngelesNO2
LosAngelesNO2
gs[public]> searchcontainer LosAngeles%
LosAngelesNO2
LosAngelesCO
Note: similar commands exist for searching rows.
Creating indexes is simple:
createindex <Container name> <Column name> <Index type> ...
gs[public]> createindex LosAngelesNO2 notwo tree
gs[public]> showcontainer LosAngelesNO2
Database    : public
Name        : LosAngelesNO2
Type        : TIME_SERIES
Partition ID: 24
DataAffinity: -

Compression Method : NO
Compression Window : -
Row Expiration Time: -
Row Expiration Division Count: -

Columns:
No  Name                  Type            CSTR  RowKey   Compression
------------------------------------------------------------------------------
 0  timestamp             TIMESTAMP       NN    [RowKey]
 1  notwo                 DOUBLE

Indexes:
Name        :
Type        : TREE
Columns:
No  Name
--------------------------
 0  notwo
And then to delete an index:
gs[public]> dropindex LosAngelesNO2 notwo tree
gs[public]> showcontainer LosAngelesNO2
Database    : public
Name        : LosAngelesNO2
Type        : TIME_SERIES
Partition ID: 24
DataAffinity: -

Compression Method : NO
Compression Window : -
Row Expiration Time: -
Row Expiration Division Count: -

Columns:
No  Name                  Type            CSTR  RowKey   Compression
------------------------------------------------------------------------------
 0  timestamp             TIMESTAMP       NN    [RowKey]
 1  notwo                 DOUBLE
Though there are a few more features that did not make it into this blog, this should give you enough information to get excited about everything included in the latest release. To read more, you can check out the full specification on GitHub.