Problems related to cluster configuration
Causes Symptoms Countermeasures
Category 1 Category 2
Process cannot start normally Recovery failure [Error code] From 20000
[Description] Recovery-related error group
[Message] In accordance with the format of the recovery error category
Possible causes include mistakes in the operating procedure or internal faults. See “Problems related to recovery processing” for details.
Error in the contents described in the definition file (gs_cluster.json, gs_node.json) [Server] 100002, 100003, 100004, 100005
[Description] Configuration setting failure
[Message] For the initial error that detected the failure, the cause is displayed: omission of a required item, a value outside the allowed range, or a data type mismatch.
There is an error in the gs_node.json/gs_cluster.json settings. Check the error message and the manual for omissions of required items, out-of-range values, or data type mismatches in the definition file, correct them, and then restart.
DNS name resolution failed for a service address described in gs_node.json. [Server] 130000
[Description] Service address setting failure
[Message] Failed address, port, and the platform error no. that caused the failure (in many cases -2 (EAI_NONAME))
Check whether each serviceAddress in gs_node.json is correct. If it has not been set, the address listed in /etc/hosts is used, so check that as well.
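For reference, a minimal sketch of the service address entries in the gs_node.json file is shown below. The section layout (cluster/transaction/sync) and the sample address and port values are assumptions for illustration; adapt them to the actual environment.
Example:
"cluster":{
 "serviceAddress":"192.168.10.11",
 "servicePort":10010,
    :
},
"transaction":{
 "serviceAddress":"192.168.10.11",
 "servicePort":10001,
    :
},
"sync":{
 "serviceAddress":"192.168.10.11",
 "servicePort":10020,
    :
}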
Error in the multicast address described in gs_cluster.json [Server] 130006
[Description] Multicast address setting failure
[Message] Failed address, port, and the platform error no. that caused the failure (22 (EINVAL))
Check whether each notificationAddress in gs_cluster.json is a valid multicast address (224.0.0.0 to 239.255.255.255). Using separate ports is recommended, but operation works even if a port is duplicated.
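For reference, a minimal sketch of the multicast settings in the gs_cluster.json file is shown below. /cluster/notificationAddress and /transaction/notificationAddress are the entries referred to elsewhere in this section; the key name notificationPort and the address and port values are assumptions for illustration.
Example:
"cluster":{
 "notificationAddress":"239.0.0.1",
 "notificationPort":20000,
    :
},
"transaction":{
 "notificationAddress":"239.0.0.1",
 "notificationPort":31999,
    :
}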
Multicast cannot be executed due to a defect in the environmental settings. [Server] 130006
[Description] Multicast address setting failure
[Message] Failed address, port, and the platform error no. that caused the failure (19 (ENODEV))
First, check whether a network interface other than the loopback (lo) exists and is up. Next, check the multicast routing setting. If nothing is specified, the default gateway setting is used, so check whether this setting is correct.
Server and database versions do not match [Server] 80022
[Description] Database version discrepancy
[Message] Module, row no., error code, log file version

Check the version no. stated in the error message and migrate to a database of a supported version. See “Compatibility between client and DB” for details on the supported versions and the “GridStore migration procedure” for the migration method.
During process start-up, the node status changes to ABNORMAL. One of the ports in gs_node.json is duplicated within the file, or is the same as a port used by a process already started on the same machine. [Server] 130024
[Description] Socket bind failure
[Message] Failed address, port, time required for retry (1 minute by default)
[External tool] Use netstat, a port scan, etc. to check for duplication and to search for free ports.
Use an external tool, etc. to check that the ports listed in gs_node.json are not already in use, and restart after correcting them to appropriate values.
The cluster is not formed even after a fixed period of time has passed. The gs_cluster.json values, or the cluster name and the number of nodes constituting the cluster, differ among the constituent nodes. [Server] 180046 to 180050
[Description] Cluster configuration failure
[Message] Connection destination address and port no., reason why the cluster could not be formed (cluster name discrepancy, configuration discrepancy, etc.)
Check that all the gs_cluster.json values, as well as the cluster name and the number of constituent nodes specified in gs_joincluster, are consistent across all the constituent nodes.
Discrepancy in the server binary version among the nodes of the cluster [Server] 40044
[Description] Server version discrepancy
[Message] Connection destination address no., own server version no., connection destination server version no.
Check the version no. listed in the error, restart the nodes with a supported version of the server binary, and re-form the cluster.
Master node of current cluster detected that another cluster with the same cluster name already exists (with the same multicast address). [Server] 180018
[Description] Duplicated cluster detection
[Message] Connection address, port no. of master node in an existing cluster

Continuing operation after a duplication has been detected may destroy data, so either stop the cluster on the side that detected the duplication (gs_stopcluster) or stop the cluster at the notification destination (repeat this if there are 2 or more such clusters), and adjust the configuration so that only one cluster with a given cluster name exists on the same subnet. When setting the cluster name in gs_cluster.json, it is strongly recommended to use a unique cluster name (and multicast address).
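As a sketch, a unique cluster name can be set in the gs_cluster.json file as shown below; the parameter name clusterName and the sample value are assumptions for illustration.
Example:
"cluster":{
 "clusterName":"productionCluster01",
    :
}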
A service address in gs_node.json is not valid. [Server] Although no particular trace remains, a cluster of 2 or more nodes cannot be formed even after a fixed period of time has passed. Check the validity of each service address. The service starts if there is only 1 node, but with 2 or more nodes the cluster cannot be formed because the nodes cannot communicate with each other.
Multicast packets for forming a cluster do not physically arrive. [External tool] Use tcpdump, netstat, etc. to check whether packets are being received on the multicast address and port specified in /cluster/notificationAddress and the notification port of gs_cluster.json.
Check that all the nodes constituting the cluster have been started in the same subnet. If a connection between different subnets is necessary, use an external tool to bridge them. In addition, check the settings on the receiving side as well, since the multicast may have been blocked by a firewall.
 
Failure to connect client to cluster The connection destination is confused because clusters with the same cluster name exist in the same network [Server] The target cluster becomes ambiguous, causing failures such as being unable to get a container; in this case, the following is recorded by one of the nodes in the cluster
[Server] 10005
[Description] Duplicated cluster detection
[Message] Connection address, port no. of master node in an existing cluster
First, check that the cluster name is unique within the same subnet. Then, check that the cluster name matches the name of the cluster that the client program is to connect to.
Binary versions are inconsistent between the client and the cluster [Server] 10054
[Description] Client version discrepancy
[Message] Connection destination address no., version no. of own client that can be connected, connection destination client version no.
Check the version no. stated in the error message, and use a client of a supported version. See “Compatibility between client and DB” regarding the supported versions.
A client failover timeout has occurred. [Server] 10008, 10009, 10010
[Description] Failover connection failure
[Client] Client failover timeout
This trace may be recorded each time the client retries. The log above remains even when a connection was eventually established before the client failover timed out; in that case it is not a problem. If a timeout has occurred, the cause of the last failure is recorded. See “Problems related to client failover”.
Multicast packets for client discovery of the cluster do not physically arrive. [External tool] Use tcpdump, netstat, etc. to check whether packets are being received on the multicast address and port specified in /transaction/notificationAddress and the notification port of gs_cluster.json. Check that the client has been started in the same subnet as the cluster. If a connection between different subnets is necessary, use an external tool to bridge them. In addition, check the settings on the receiving side as well, since the multicast may have been blocked by a firewall.
Database version discrepancy Server and database versions do not match [Server] 80022
[Description] Database version discrepancy
[Message] Module, row no., error code, log file version

Check the version no. stated in the error message and migrate to a database of a supported version. See “Compatibility between client and DB” for details on the supported versions and the “GridStore migration procedure” for the migration method.
See “Annex”, “Parameter List” in the “GridStore quick start guide” for the gs_node.json and gs_cluster.json settings.
A “Migration procedure” is provided as part of the basic support services. Note that the procedure is not included with the installation media and package.
Problems related to cluster expansion and reduction
Causes Symptoms Countermeasures
Category 1 Category 2
Command will not be executed normally as the conditions to add or detach a node are not satisfied Number of nodes already participating in a cluster does not match the number of nodes constituting a cluster [Server] 180055
[Description] Status does not allow cluster addition/detachment
[Format] Number of nodes already participating in a cluster, number of nodes constituting a cluster of the current cluster
[Check] gs_stat /cluster/activeCount != /cluster/designatedCount

Check whether a failure has occurred in a node within the cluster. If a failure has occurred, first try to see whether the node can be restored to the cluster. If it cannot be restored, prepare a new node separately and let it join the cluster. After this, check with gs_stat that the number of nodes already participating in the cluster (activeCount) matches the number of nodes constituting the cluster (designatedCount), and then execute the cluster addition/detachment command again.
The number of nodes trying to join the cluster is more than the number of nodes constituting the cluster [Server] 180045
[Description] Failure to compose cluster as the number of nodes exceeds the number of nodes constituting the cluster
[Format] Number of nodes constituting a cluster of the current cluster and master address in the node to be added

New nodes cannot join because the cluster has already reached its upper limit (= the number of nodes constituting the cluster). To increase the number of nodes constituting the cluster further, execute the gs_appendcluster command against the node to be added to the cluster.
If the target node leaves the cluster, the current cluster will lose data or the cluster will be dissolved. [Server] 180030
[Description] Node cannot leave the cluster
[Format] Reason why the node cannot leave the cluster (the cluster cannot maintain a majority of nodes if the target node leaves, or data loss occurs if even one node leaves)
If there is a risk of data being lost or the cluster being dissolved by executing a gs_leavecluster command, the node is not allowed to leave the cluster. If you must force the node to leave, append the --force option when executing the command. However, avoid using --force wherever possible, as the cluster will be reset and data may be lost.
Problems related to client failover
Causes Symptoms Countermeasures
Category 1 Category 2
Errors caused by resource abnormalities A failure has occurred in the network between the client and cluster. [Server] 130008
[Description] Network connection error
[Format] Communication failure cause

If the failure is temporary, set the failover timeout long enough for the failure to be resolved within it. If the failure occurs regularly, consider measures such as making the network redundant.
Error caused by timeout setting Either the server load is very high, or the load of the relevant transaction itself is high [Server] 50000, 50001
[Client] 70000
[Description] Transaction timeout
[Format] Timeout elapsed period, partition no., connection address, failure cause

Check whether a single statement of the application being executed can complete within the transaction timeout period, and set a suitable value. This phenomenon can also occur when the load of the entire server becomes temporarily high. Execute gs_stat with the --memoryDetail option and check regularly whether the total amount of memory secured for communication messages (/performance/memoryDetail/work.transactionMessageTotal) has grown. In particular, during asynchronous replication, processing may become temporarily concentrated and the load may rise on the backup side. If a timeout occurs for these reasons, change to the semi-synchronous replication mode.
A deadlock or extended lock is maintained in the application. [Server] 50000, 50001
[Client] 70000
[Description] Transaction timeout
[Format] Timeout elapsed period, partition no., connection address, failure cause

Check the total amount of memory secured for communication messages (/performance/memoryDetail/work.transactionMessageTotal), which is output by executing gs_stat with the --memoryDetail option. If this value does not decrease regularly, a deadlock or a lock wait is occurring. Review the application and terminate it if necessary. Take appropriate measures on the application side, as the server will not release the lock if no limit is set.
A failover timeout has occurred [Client] 70000
[Description] Client failover timeout
[Format] Timeout elapsed period, partition no., connection address, failure cause

For large-scale data, where a cluster failover takes time, especially when a single row (such as a BLOB) is extremely large, the failover (the synchronization process carried out during it) may take a while to complete, so set a longer failover timeout. In addition, since failure detection is carried out at the heartbeat interval, a large heartbeat interval delays failure detection, which also lengthens the failover time.
Error caused by a replication failure Failover process started with the replication process not completed normally [Server] 50002
[Description] Update operation continuity check error
[Format] Partition no., connection address, failure cause
This symptom may appear when backup data is missing because of the timing of the node failure relative to when messages are sent or received in the replication process. As the probability of this symptom is especially high in asynchronous replication, it is recommended, if availability is a priority, to operate the cluster in the semi-synchronous replication mode after considering the trade-off with performance.
Error caused by a stop of the data service at the failover destination The cluster, which was valid when the failover started, is reset during the failover and enters a sub-cluster status. [Server] 10010
[Description] Access when a cluster is not composed yet
[Format] Partition no., connection address, failure cause
See “Problems related to cluster failure” for details on the causes of cluster failure. If the cluster configuration is reset because half or more of the nodes are down, prepare new nodes and return the cluster to a state in which the number of nodes constituting the cluster can be secured.
Node failures have occurred simultaneously in more nodes than the number of replicas set in gs_cluster.json.
[Server] 10007
[Description] (Data service stopped due to detection of data lost)
[Format] (*Master node only) Partition no., LSN (Log Sequence Number) of the latest data including the down node in corresponding partition, largest LSN in the current cluster, node address presumed to hold the latest data (however, reliability is not guaranteed)
[Command check] The same data as the error description above can be acquired with gs_partition --loss
Because the cluster has detected that continuing operation would break data consistency, the data service for the partition concerned is stopped. There is a trade-off between availability and performance, but if availability is a priority, set the number of replicas in gs_cluster.json to be equal to or higher than the number of nodes that are expected to go down simultaneously.
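A minimal sketch of raising the number of replicas in the gs_cluster.json file is shown below, assuming the parameter is /cluster/replicationNum; the value 3 is only an example and should be set according to the number of simultaneous node failures to be tolerated.
Example:
"cluster":{
 "replicationNum":3,
    :
}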
The number of nodes that failed simultaneously was equal to or less than the number of replicas set in gs_cluster.json, but at the point the failure occurred there was a partition whose number of replicas was temporarily insufficient.
[Server]: 10007
[Description] (Data service stopped due to detection of data lost)
[Format] (*Master node only) Partition no., LSN of the latest data including the down node in corresponding partition, largest LSN in the current cluster, node address presumed to hold the latest data
[Command check]
Use gs_partition --loss to check after the occurrence. If the system is operating with an insufficient number of replicas, REPLICA_LOSS appears in gs_stat /cluster/partitionStatus, so check the current availability. To see the status of an individual partition, use gs_partition to check the number of replicas of each partition.
To reduce downtime during a failover, if GridStore deems that synchronizing a certain partition will take a certain amount of time (judged by the size of the applicable log), it first synchronizes only the group of nodes that can be synchronized within a short time and partially starts the data service. In this case, the missing replicas are created asynchronously in the background, and availability is lowered until they are recovered. Therefore, even if the number of replicas set in gs_cluster.json is sufficient, note that this status results when replica recovery in the background cannot keep up.
A persistent configuration error such as an unstable heartbeat has been detected in the cluster, and failover is repeated. [Server]: 50003
[Description] (Access when a cluster is not composed yet)
[Format] Partition no., connection address, failure cause
[Server]: 50004
[Description] (Access when a cluster is being composed)
[Format] Partition no., connection address, failure cause
Cluster failure occurs regularly, making the cluster unstable. See “Problems related to cluster failure”.
After a network disruption occurs, operation continues because a cluster in which a majority of the nodes can be secured remains, but the latest data exists on the disconnected side [Server]: 10007
[Description] (Data service stopped due to detection of data lost)
[Format] (*Master node only) Partition no., LSN of the latest data including the down node in corresponding partition, largest LSN in the current cluster, node address presumed to hold the latest data
A network disruption shows the same symptoms as the disconnected node being down. If the latest data exists on the disconnected node, the data service of the relevant partition is stopped temporarily, but once the disrupted network returns to normal, the automatically stopped data service is restarted.
Problems related to cluster failure
Causes Symptoms Countermeasures
Category 1 Category 2
There are no changes in the cluster configuration, but the data service stops for some of the partitions, or operation continues partially due to an insufficient number of replicas. The current owner deemed that a backup with a large replication delay had failed and excluded it from the backups. [Server]: 10011
[Description] (Backup error detected)
[Format] (*Master node only) Partition no., owner address, owner LSN, backup address, backup LSN
 
Replication delays tend to occur especially in the asynchronous replication mode. If delays occur frequently, either change to the semi-synchronous replication mode (set /transaction/replicationMode in the gs_cluster.json file to 1), or add /cluster/ownerBackupLsnGap to the gs_cluster.json file and make the value larger than the default value (50000) (*1).
Example:
"cluster":{
 "ownerBackupLsnGap":"100000",
    :
}
 
Because synchronization did not complete before the synchronization timeout, operation was started partially, with fewer than the specified number of replicas, for some of the data services. [Server]: 10016
[Description] (Synchronization timeout detected)
[Format] (*Owner node only) Partition no., owner address when operation starts partially
There is a trade-off with application downtime, but if availability is a priority, increase the synchronization timeout (/sync/timeoutInterval in the gs_cluster.json file). This value serves as a guide for the maximum downtime.
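A sketch of lengthening the synchronization timeout in the gs_cluster.json file is shown below; the value and its unit notation are illustrative only, so confirm the format against the parameter list in the “GridStore quick start guide”.
Example:
"sync":{
 "timeoutInterval":"120s",
    :
}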
The data service of the relevant partition has been stopped because continuing the service while a node holding the latest data of the partition is down would cause a data inconsistency. [Server]: 10007
[Description] (Data service stopped due to detection of data lost)
[Format] (*Master node only) Partition no., LSN of the latest data including the down node in corresponding partition, largest LSN in the current cluster, node address presumed to hold the latest data
See “Problems related to client failover” as well.
In addition, the probability of occurrence can be lowered by increasing the number of replicas or by changing to semi-synchronous replication (set /transaction/replicationMode to 1 in the gs_cluster.json file).
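A sketch of switching to the semi-synchronous replication mode in the gs_cluster.json file is shown below (1 means semi-synchronous, as stated above).
Example:
"transaction":{
 "replicationMode":1,
    :
}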
A failure requiring a change in the cluster configuration has occurred. The master node has detected that a follower node is down, that the node status has changed to ABNORMAL, or that gs_leavecluster has been executed. [Server]: 10010
[Description] (Start failover)
[Format] (*Master node only) List of nodes with detected errors, failover no.
Check the event log and error code of the respective node for the respective error description.
A heartbeat error due to network failure or a large delay occurring has been detected. [Server]: 10008, 10009
[Description] (Heartbeat timeout)
[Format] (*Master node only) Node with detected error, heartbeat limit time, final heartbeat arrival time
[External tool] Check whether it is within the network bandwidth

Lengthen the heartbeat interval if the timeouts occur regularly rather than only intermittently. However, as failure detection and recovery become slower if the heartbeat interval is too long, take the trade-off with availability into consideration when setting it. If the delay is due to the network bandwidth, the probability of occurrence can also be lowered by placing each serviceAddress/servicePort in gs_node.json on a network separate from other traffic.
A heartbeat error due to a high load on resources other than the network has been detected. [Server]: 10008, 10009
[Description] (Heartbeat timeout)
[Format] (*Master node only) Node with detected error, heartbeat limit time, final heartbeat arrival time
[External tools] Resource investigation tool

Lengthen the heartbeat interval if the timeouts occur regularly rather than only intermittently. Unlike the network case, this can have a variety of causes, e.g. swapping due to insufficient memory, waiting for disk I/O, the server being busy executing a high-load application, start-up of another application on the same machine, and so on. Besides gs_stat, check the resource status of the entire machine concerned as well.
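As a rough sketch, the heartbeat interval mentioned in the two rows above could be lengthened in the gs_cluster.json file as shown below. The parameter name heartbeatInterval and the value are assumptions for illustration, so confirm them against the parameter list in the “GridStore quick start guide” before applying.
Example:
"cluster":{
 "heartbeatInterval":"10s",
    :
}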
A failure to maintain the cluster configuration has occurred.
Cluster has been reset due to a majority of the nodes leaving the cluster or errors being detected in a majority of the nodes.
[Server]: 10014
[Description] (Cluster breakup as a majority of the nodes cannot be secured)
[Format] (*Master node only) Number of nodes required to maintain a cluster, number of nodes already participating in a cluster
Detection of heartbeat error is also recorded just before the event.
[Server]: 10008, 10009
[Description] (Heartbeat timeout)
[Format] (*Master node only) Node with detected error, heartbeat limit time, final heartbeat arrival time
Recover the nodes that are down due to the failure. If recovery is not possible, start a new node separately and let the cluster recover so that the number of nodes constituting the cluster is reached. However, since the latest data may be retained on a down node, check using "gs_partition --loss", etc. when adding a node.
A network disruption occurred in a cluster that was in operation, and re-formation of the cluster was attempted afterwards, but a majority of the nodes could not be secured on any side.
[Server]: 10014
[Description] (Cluster breakup as a majority of the nodes cannot be secured)
[Format] (*Master node only) Number of nodes required to maintain a cluster, number of active nodes
Detection of heartbeat error is also recorded just before the event.
[Server]: 10008, 10009
[Description] (Heartbeat timeout)
[Format] (*Master node only) Node with detected error, heartbeat limit time, final heartbeat arrival time
When a network disruption occurs, the cluster is automatically restarted once the disruption is resolved, but if there is no prospect of recovery from the disruption, the number of constituent nodes needs to be reduced manually to re-form the cluster. However, since the latest data may be retained on a disconnected node, check using "gs_partition --loss", etc. when re-forming the cluster.
Failure to rebalance (replica creation for nodes with insufficient replicas and uniform distribution of replicas among nodes)
Checkpoint contention, etc. occurred and the rebalancing process under execution could not be continued. [Server]: 10012
[Description] (Rebalance failure)
[Format] Failed partition no., partition group no., checkpoint no., failure cause

If a checkpoint is executed during rebalancing, the data file may be updated and log files may be deleted, in which case the rebalancing process under execution is cut off midway. Even if the process is cut off, checks and retries are carried out regularly, but time is lost, so the probability of occurrence can be reduced by increasing the checkpoint interval and the number of retained log files beforehand.
Rebalancing could not be completed within the rebalance timeout period. [Server]: 10014
[Description] (Rebalance timeout)
[Format] (*Master node only) Timeout detected partition no., timeout time
Increase the rebalance timeout time.
A failure to continue has occurred in a cluster node.
Node stopped due to disk full. [Server]: Each platform error no.
[Description] Platform error (disk full)
[External tools] Check with df, du
Either add disks or prepare new nodes that can secure new disk space.
Node stopped due to disk I/O error [Server]: Each platform error no.
[Description] Platform error (disk I/O)
Several scenarios are possible, e.g. a physical disk failure, a file write failure due to resource exhaustion, accidental manual deletion of a required file, and so on. See “Problems related to recovery processing” for the last case.
Node stopped due to memory error [Server]: Each platform error no.
[Description] Platform error (memory allocate)
[External tools] vmstat, top
[Command check] gs_stat : /performance/processMemory

First, check with gs_stat whether the memory upper limit (storeMemoryLimit) has been set too high relative to the physical memory. In addition, since request processing may stagnate when processMemory grows large, check the total amount of memory secured for communication messages (/performance/memoryDetail/work.transactionMessageTotal), which is output by executing gs_stat with the --memoryDetail option.
Check "gs_stat" for the storeMemoryLimit and processMemory. storeMemoryLimit can be changed by editing gs_node.json and using "gs_paramconf".
(*1)
/cluster/ownerBackupLsnGap in the gs_cluster.json file: LSN threshold for determining backup error of the partition and promotion to the owner (master of partition)
In future, this parameter may be deleted or its name may be changed.
Problems related to recovery processing
Causes Symptoms Countermeasures
Category 1 Category 2
Failure due to operating error Definition file settings do not match the contents in the database file. [Server]: 7036, 160012, 160013
[Description] (Number of partition groups discrepancy)
[Format] Number of partition groups in the definition file, number of partition groups in the database file

Check the value of /dataStore/concurrency in gs_node.json.
When recovering from a backup, check that the settings match those at the time of the backup (/configInfo/groupNum in gs_backup_info.json).
[Server]: 68024, 160018
[Description] (Number of partitions inconsistent)
[Format] Number of partitions in the definition file, number of partitions in the database file
Check the value of /dataStore/partitionNum in gs_cluster.json.
When recovering from a backup, check that the settings match those at the time of the backup (/configInfo/partitionNum in gs_backup_info.json).
[Server]: 68026, 68064
[Description] (Block size inconsistent)
[Format] Block size of definition file, block size of database file
Check the value of /dataStore/storeBlockSize in gs_cluster.json.
Check whether the value is the same as actualFileSize in the event log (block size of database file).
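For reference, a sketch of where the parameters checked in the rows above are set. The values shown are typical defaults used only for illustration; they must match the database files being recovered.
Example (gs_node.json):
"dataStore":{
 "concurrency":4,
    :
}
Example (gs_cluster.json):
"dataStore":{
 "partitionNum":128,
 "storeBlockSize":"64KB",
    :
}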
File required for recovery does not exist [Server]: 20004 to 20008
[Description] (Necessary file check error)
[Format] Details of the files found to be missing in the check
Check whether all the files have been copied when starting from a backup ("gs_restore --test", etc.).
See the “GridStore backup guide” for details regarding backup.
Failure due to internal error A failure to continue has occurred during the recovery process [Server]: 20009
[Description] (Internal error)
[Format] Based on tracing at the point the error is detected
Contact the support service. If a backup exists, recovery using that data is also possible.
Problems related to container operations
Causes Symptoms Countermeasures
Category 1 Category 2
Response common to all operations If a timeout has occurred Client failover timeout See “Problems related to client failover”.
If a connection failure has occurred Client failover timeout See “Problems related to client failover”.
Response common to all operations Failure caused by an object being invoked after it has been closed [Server]: 140036/145036/140038/145038/140040/145040
[Description] (Close-related error)
Perform the relevant operation before invoking the close process on the container or resource. Check the error message and respond accordingly.
If a non-conformance has occurred in a registered container [Server]: 60151
[Description] (A non-conformance has occurred in the container)
[Format] Partition ID, container name

Specified operation cannot be carried out. Check with support services.
A non-conforming status has been detected in the data of a specific container. Although service continues, the container concerned can no longer be updated. Search results may also become invalid.
  [Server]: 10017
[Description] (Schema version of container is inconsistent)
[Format] Partition ID, container name, request schema version ID, current schema version ID
Get the container object again.
If the memory usage limit size is exceeded [Server]: 130033
[Description] (Memory upper limit exceeded)
[Format] Memory upper limit size
When raising the upper limit of the usable memory size, add /dataStore/resultSetMemoryLimit or /transaction/totalMemoryLimit to the gs_node.json file, and make the value larger than the default value (*1).
If the memory size is exceeded during a search, the query can also be adjusted by restricting the number of hits so that the memory capacity used is reduced.
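A sketch of raising these two limits in the gs_node.json file is shown below. Per (*1) the unit is MB when no unit is given; the values are illustrative only.
Example:
"dataStore":{
 "resultSetMemoryLimit":"20480",
    :
},
"transaction":{
 "totalMemoryLimit":"2048",
    :
}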
Failure to register container
GridStore::putCollection()
GridStore::putContainer()
GridStore::putTimeSeries()
If a container constraint violation occurred
[Error code]: 60015
[Description] (Container constraint violation)
[Format] Error description below
Container name size limit exceeded,
Column number limit exceeded,
Column name duplication error,
Row key specification error,
Row key data type support error,
Row key value constraint error,
Array data type support error,
Deadline release split value limit exceeded,
Error due to value lying outside deadline release range,
Configuration column limit exceeded in the thinning compression,
Affinity size limit exceeded,
etc.
Check the error message and correct the container data. See “Container control” under “Cluster operation control command interpreter (gs_sh)” in the “Operation control guide” for the container settings.
If a discrepancy has occurred in the schema of a registered container
[Error code]: 60149/60016
[Description] (Schema inconsistent with existing container)
[Format] Error description below
Discrepancy error in the row key definition,
Discrepancy error in the data type of identical column names,
Discrepancy error in all the column names,
Affinity value discrepancy error,
Discrepancy error in the deadline release setting,
Discrepancy error in the compression setting,
etc.
Check the error message and correct container data that are inconsistent. Container data can be checked by using the integrated operation control GUI (gs_admin) or the "showcontainer" command of "gs_sh".
Failure to change schema of container
GridStore::putCollection()
GridStore::putContainer()
GridStore::putTimeSeries()
If a container constraint violation occurred
[Error code]: 60015
[Description] (Container constraint violation)
[Format] Error description below
Container name size limit exceeded,
Column number limit exceeded,
Column name duplication error,
Row key specification error,
Row key data type support error,
Row key value constraint error,
Array data type support error,
Deadline release split value limit exceeded,
Error due to value lying outside deadline release range,
Configuration column limit exceeded in the thinning compression,
Affinity size limit exceeded,
etc.
Check the error message and correct the container data. See “Container control” under “Cluster operation control command interpreter (gs_sh)” in the “Operation control guide” for the container settings.
If a constraint violation occurred between the schema to be changed and the registered container
[Server]: 60149
[Description] (Container constraint violation)
[Format] Error description below
Discrepancy error in the row key definition,
Discrepancy error in the data type of identical column names,
Discrepancy error in all the column names,
Affinity value discrepancy error,
Discrepancy error in the deadline release setting,
Discrepancy error in the compression setting,
etc.
Check the error message and correct container data that are inconsistent. Container data can be checked by using the integrated operation control GUI (gs_admin) or the "showcontainer" command of "gs_sh".
If the parameter to permit the schema to be changed has not been set up [Server]: 60016
[Description] (No schema change permit)
Changing the column layout of an existing container is not permitted. To permit a change in the column layout, set the "modifiable" parameter to true.
Failure to get container
GridStore::getCollection()
GridStore::getContainer()
GridStore::getTimeSeries()
If different container types with the same name exist [Server]: 60026
[Description] (Container type is inconsistent)
Check if there are any mistakes in the container type of the specified container.
If the specified data type is not appropriate [Server]: 140023/145023
[Description] Failure caused by the specified data type not matching the existing column layout.
If the specified data type is not suitable as the data type of the row object
Check if there are any mistakes in the specified column layout. Column data can be checked by using the integrated operation control GUI (gs_admin) or the "showcontainer" command of "gs_sh".
Failure to delete container
GridStore::dropCollection()
GridStore::dropContainer()
GridStore::dropTimeSeries()
If different container types with the same name exist [Server]: 60026
[Description] (Container type is inconsistent)
Check if there are any mistakes in the container type of the specified container.
Failure to register index
Container::createIndex()
If a column with the corresponding name does not exist [Server]: 140008/145008
[Description] (Unknown column name)
Check if there are any mistakes in the specified column name. Column data can be checked by using the integrated operation control GUI (gs_admin) or the "showcontainer" command of "gs_sh".
If an unsupported column type is specified in the index settings [Server]: 1007
[Description] (Not supported)
Check if there are any mistakes in the specified column (column type). See “API list (Java)”, “createIndex” in the “API reference” for details about the index.
Failure to delete index
Container::dropIndex()
If a column with the corresponding name does not exist [Server]: 140008/145008
(Unknown column name)
Check if there are any errors in the column name. Column data can be checked by using the integrated operation control GUI (gs_admin) or the "showcontainer" command of "gs_sh".
Failure to register trigger
Container::createTrigger()
If a trigger constraint violation occurred
[Server]: 140001/145001/170003/10040
[Description] (Trigger constraint violation)
[Format] Error description below
If the trigger name is null or blank
If the update operation subject to monitoring is not specified
If the notification destination URI does not conform to the stipulated syntax
If the JMS is specified by the trigger type, and the JMS destination type is null, or is blank, or does not conform to the specified format
If the JMS is specified by the trigger type, and the JMS destination name is null, or is blank
If the process times out, the container has been deleted, a connection failure occurs, or the call is invoked after the object is closed
Check the error message and correct the trigger data. See “Trigger function” in the “API reference” for details about the trigger function.
If the upper limit of the trigger is exceeded
[Server]: 1008
[Description] (Trigger upper limit violation)
[Format] Error description below
Limit value of trigger name exceeded
Prohibited characters in the trigger name
Limit value of URI length exceeded
Limit value of the number of trigger registrations exceeded
Check the error message and correct the trigger data. See “API list (Java)”, “createTrigger” in the “API reference” for details about the trigger function.
Failure to delete trigger
Container::dropTrigger()
None except common causes    
Failure to register or update row
Container::put()
TimeSeries::append()
If a key is specified even though no column corresponding to the row key exists [Server]: 140024/145024
[Description] (Key cannot be found)
Specified row key does not exist. Check the setting.
If there is a constraint violation in a column value [Server]: 60079
[Description] (Column constraint violation)
[Format] Error description below
Array length limit exceeded,
Limit of variable-length data types such as string, spatial, BLOB, etc. exceeded
Time value outside of support range,
etc.
Check the error message and correct the column value. See “Data type” under “Overview” in “API reference” for details on the column value.
Failure to register or update rows all together
Container::multiput()
If the specified container does not exist [Server]: 10016
[Description] (Container cannot be found)
Specified container does not exist. Check the container name.
If the specified data type is not appropriate
[Server]: 60015
[Description] Failure caused by the specified data type of the row not matching the existing column layout.
If the specified data type of the row is not suitable as the data type of the row object
Check the column data and change it to a suitable column layout (column type).
If there is a constraint violation in a column value [Server]: 60079
[Description] (Column constraint violation)
[Format] Error description below
Array length limit exceeded,
Limit of variable-length data types such as string, spatial, BLOB, etc. exceeded
Time value outside of support range,
etc.
Check the error message and correct the column value. See “Data type” under “Overview” in “API reference” for details on the column value.
Failure to update row via a RowSet
RowSet::update()
If row does not exist at target position [Server]: 140037/145037
[Description] (Specified destination of cursor does not exist)
Check cursor position of RowSet
If a RowSet acquired without enabling the lock is invoked [Server]: 140039/145039
[Description] (Not locked)
Lock is required
If there is a constraint violation in a column value [Server]: 60079
[Description] (Column constraint violation)
[Format] Error description below
Array length limit exceeded,
Limit of variable-length data types such as string, spatial, BLOB, etc. exceeded
Time value outside of support range,
etc.
Check the error message and correct the column value. See “Data type” under “Overview” in “API reference” for details on the column value.
Failure to delete row
Container::remove()
If column corresponding to row key does not exist [Server]: 140024/145024
[Description] (Key cannot be found)
Specified row key does not exist. Check the setting.
If row deletion is specified for a container due for compression [Server]: 60086
[Description] (Operation invalid for compressed container)
A row cannot be deleted in a time series container for which compression has been specified.
Failure to delete row via a RowSet
RowSet::remove()
If row does not exist at target position [Server]: 140037/145037
[Description] (Specified destination of cursor does not exist)
Check cursor position of RowSet
If a RowSet acquired without enabling the lock is invoked [Server]: 140039/145039
[Description] (Not locked)
Lock is required
Failure to commit
Container::commit()
 If invoked despite being in the auto commit mode [Server]: 140035/145035
[Description] (Invalid commit mode)
If the logWriteMode value in gs_node.json is "DELAYED_SYNC", log writes are performed at the specified interval. Set logWriteMode to "SYNC" when executing a commit explicitly.
Failure to abort
Container::abort()
 If invoked despite being in the auto commit mode [Server]: 140035/145035
[Description] (Invalid commit mode)
If the logWriteMode value in gs_node.json is "DELAYED_SYNC", log writes are performed at the specified interval. Set logWriteMode to "SYNC" when executing an abort explicitly.
Row registration or update is not reflected
Container::put()
TimeSeries::append()
If a row update is registered for a container for which compression is specified, or a row with a time earlier than the latest registered time is registered. [Trace]
[insert (old time)/update not support on Compression Mode]
This is the specification for time series containers for which compression is specified.
Only new rows with a time newer than that of the existing row with the latest time can be created in a time series container whose compression option has been set. If the specified time matches the time of the existing row with the latest time, the contents of the existing row are retained without change.
Failure in search to get row
Query::get()
If column corresponding to row key does not exist [Server]: 140024/145024
[Description] (Key cannot be found)
Specified row key does not exist. Check the setting.
If there is a constraint violation in a key value [Server]: 70002
[Description] (Key constraint violation)
[Format] Error description below
String limit exceeded
Time value outside of support range
Check the error message and set the key value so that it does not deviate from the constraints of the data type. See “Data type” under “Overview” in “API reference” for details on the value.
If an update lock request is attempted despite being in the auto commit mode [Server]: 140035/145035
[Description] (Invalid commit mode)
If the logWriteMode value in gs_node.json is "DELAYED_SYNC", log writes are performed at the specified interval. Set logWriteMode to "SYNC" when requesting an update lock.
Failure to search and acquire rows of multiple containers together
Query::multiGet()
If the specified container does not exist [Server]: 10016
[Description] (Container cannot be found)
Specified container does not exist. Check the setting.
If there is a constraint violation in a key value [Server]: 70002
[Description] (Key constraint violation)
[Format] Error description below
String limit exceeded
Time value outside of support range
Check the error message and set the key value so that it does not deviate from the constraints of the data type. See “Data type” under “Overview” in “API reference” for details on the value.
Failure in linear interpolation search of rows
TimeSeries::interpolate()
If a column with the corresponding name does not exist, or if a column with an unsupported data type is specified
[Server]: 140008/145008
[Description] (Unknown column name)
Cannot be implemented for containers in which the row key has not been set.
If there is a constraint violation in a key value [Server]: 70002
[Description] (Key constraint violation)
[Format] Error description below
String limit exceeded
Time value outside of support range
Check the error message and set the key value so that it does not deviate from the constraints of the data type. See “Data type” under “Overview” in “API reference” for details on the value.
Failure to search for row samples
TimeSeries::query()
If there is a constraint violation in the unit of the sampling period [Server]: 60151
[Description] (Unit of period is out of the support range)
Unit set must not be YEAR, MONTH, or MILLISECOND.
If there is a constraint violation in the sampling interval [Server]: 70004
[Description] (Sampling interval constraint violation)
[Format] Error description below
Interval value is 0 or a negative value
Set a positive value (excluding 0).
If specified column name does not exist [Server]: 140008/145008
[Description] (Unknown column name)
Specified column does not exist. Check the setting.
If there is a constraint violation in a key value [Server]: 70002
[Description] (Key constraint violation)
[Format] Error description below
String limit exceeded
Time value outside of support range
Check the error message and set the key value so that it does not deviate from the constraints of the data type. See “Data type” under “Overview” in “API reference” for details on the value.
Failure in the row consolidation function
TimeSeries::aggregate()
If a column that cannot be used with the specified operation method is specified [Server]: 60100
[Description] (Unauthorized column operation)
Review the calculation formula and change the setting to a column that can be applied. See “Conditional syntax and calculation functions” in “API reference” for the conditions of each calculation.
If specified column name does not exist [Server]: 140008/145008
[Description] (Unknown column name)
Specified column does not exist. Check the setting.
(*1)
/dataStore/resultSetMemoryLimit in gs_node.json file: memory upper limit size of search result (ResultSet). Default value is 10240 MB. Unit is MB by default
/transaction/totalMemoryLimit in gs_node.json file: upper limit size of the empty memory maintained by the transaction process memory pool. Default value is 1024 MB. Unit is MB by default
In future, these parameters may be deleted or their names may be changed.
TQL-related problems
Causes Symptoms Countermeasures
Category 1 Category 2
Failure due to a descriptive error common in the TQL descriptions If the specified column does not exist [Error code]: 150012
[Description] (Column cannot be found)
[Format] Error description below
Column cannot be found
Column ID cannot be found
Check the error message and then check the column name and column ID.
Failure in interpretation of TQL [Error code]: 151001
[Description] (Syntax error)
Syntax error occurred. Check the TQL syntax. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
[Error code]: 151002
[Description] (Token is invalid)
Invalid key word is detected. Check the TQL syntax. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
[Error code]: 151003
[Description] (Not enclosed by double quotation marks)
[Format] Error description below
Cannot dequote
Double quotation mark is invalid. Check the TQL syntax. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
If data does not exist [Error code]: 152009
[Description] (Array index is out of range)
[Format] Error description below
Index specified in the array is out of range
Array index is invalid. Check the TQL syntax. See “Data type” in “API reference” for details on the array (complex type).

Failure due to descriptive error of FROM section
If the container name is incorrect [Error code]: 150010
[Description] (Container name is invalid)
[Format] Error description below
Container name given by the API is not the same as the FROM section
Specified container does not exist. Check the container name.
Failure due to descriptive error of WHERE section If there is a constraint violation resulting in failure in interpretation TQL [Error code]: 150013
[Description] (* is used in the WHERE condition)
[Format] Error description below
* cannot be used in the WHERE condition
* cannot be used in the WHERE condition. Check the TQL syntax. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
If a division by 0 is attempted [Error code]: 150016
[Description] (Division by 0)
[Format] Error description below
Division by 0
Division by 0 is not possible. Check the TQL syntax.
If there is a data type constraint violation [Error code]: 150018
[Description] (Binary process is not supported)
[Format] Error description below
Binary operation is not defined
Binary operations between the specified data types cannot be executed. Review the TQL syntax.
[Error code]: 150019
[Description] (Constraint violation in column condition search)
[Format] Error description below
Column used in the WHERE condition is not a Boolean data type
If a column is used individually in the WHERE condition, the column data type needs to be Boolean. Review the TQL syntax.
[Error code]: 152010
[Description] (Constraint violation in spatial search)
[Format] Error description below
The column specified in the spatial range condition is not a spatial data type
Use a spatial data type column to specify the condition in searching the spatial range. Check the TQL syntax.
Failure due to descriptive error of the ORDER BY section Failure caused by the alignment method expression [Error code]: 151004
[Description] (Constraint violation of alignment method in ORDER BY section)
[Format] Error description below
Only column names are permitted in the ORDER BY section
Check the alignment method specified in the ORDER BY section. See “Sorting of search results (ORDER BY)” under “TQL syntax and operation functions” in the “API references” for the ORDER BY section.
Failure that violates operating conditions [Error code]: 151005
[Description] (Constraint violation of operating conditions in ORDER BY section)
[Format] Error description below
Consolidation function and ORDER BY are being used simultaneously
Consolidation function and ORDER BY cannot be used simultaneously. Check the TQL syntax. See “Sorting of search results (ORDER BY)” under “TQL syntax and operation functions” in the “API references” for the ORDER BY section.
Failure due to descriptive error of the function Failure caused by function name [Error code]: 150014
[Description] (Function cannot be found)
[Format] Error description below
No such function
Check the function name. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
If the operating conditions of the function are violated [Error code]: 152003/152004
[Description] (Unauthorized use of special function for collection/time series containers)
[Format] Error description below
Selection function for collection was used on a time series container
Selection function for time series container was used on a collection
Consolidation function for time series container was used on a collection
Check whether a special function for time series containers has been used on a collection. Or, check whether a special function for collections has been used on a time series container. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
Failure caused by the number of arguments [Error code]: 152002/152007/152011
[Description] (Number of arguments is invalid)
[Format] Error description below
Number of arguments in function is invalid
Argument expected to be empty is not empty in the function
Check the argument of the function. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
Failure caused by the data type of the argument [Error code]: 152002/152007/152010/152020
[Description] (Data type of argument is invalid)
[Format] Error description below
Data type of argument in function is invalid
Interpolation target is not a column
Interpolation was attempted on a column which is not a numerical value
An argument which is not a column has been used
Check the data type given in the argument of the function. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
Failure caused by argument value
[Error code]: 150005/152010/152012
[Description] (Value of argument is invalid)
[Format] Error description below
Data outside the time range has been used
Coordinate value is invalid
Escape text is not a single character
A natural number was expected but a value less than 0 was used
String cannot be converted to time format
Data given in argument of function is incorrect. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
Failure due to descriptive error of the spatial data Failure caused by the number of arguments [Error code]: 152007/152010
[Description] (Number of arguments is invalid)
[Format] Error description below
Argument is required but space is blank
Attempt to create a line with a single point
Check the argument of the spatial data expression. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
Failure caused by the data type of the argument [Error code]: 152010
[Description] (Data type of argument is invalid)
[Format] Error description below
Data type of argument is invalid
Check the type of argument of the spatial data expression. See “TQL syntax and operation functions” in “API reference” for details on the TQL syntax.
Failure to execute TQL If memory could not be secured [Error code]: 1001
[Description] (Error in securing memory)
[Format] Error description below
Memory of TQL parser cannot be secured
Memory of WKT parser cannot be secured

When raising the upper limit of the usable memory size, add /dataStore/resultSetMemoryLimit or /transaction/totalMemoryLimit to the gs_node.json file, and make the value larger than the default value (*1).
If the memory size is exceeded during a search, the query can also be adjusted by restricting the number of hits so that the memory capacity used is reduced.
(*1)
/dataStore/resultSetMemoryLimit in gs_node.json file: memory upper limit size of search result (ResultSet). Default value is 10240 MB. Unit is MB by default
/transaction/totalMemoryLimit in gs_node.json file: upper limit size of the empty memory maintained by the transaction process memory pool. Default value is 1024 MB. Unit is MB by default
In future, these parameters may be deleted or their names may be changed.
Reference - Compatibility The client versions and database versions corresponding to each server version are as follows.
Server version, Client internal version, Client version, Database internal version, Server version when DB was created
v 1.0 1 v 1.0/1.1 1 v 1.0
v 1.1 1 v 1.0/1.1 1, 2 v 1.0/1.1
v 1.5 2 v 1.5 1, 2, 3 v 1.0/1.1/1.5
v 2.0 3 v 2.0 4 v 2.0
v 2.1 4 v 2.1 5 v 2.1
v 2.5 5 v 2.5 5 v 2.1/2.5
v 2.7 5, 6 v 2.5/2.7 5, 6 v 2.1/2.5/2.7
<Memo> The internal version is the version used for internal processing and is displayed in error messages.