

GridDB IoT Hackathon Recap (Part 1 of 2): The Online Idea Phase

Introduction

Over the last few months of 2025, GridDB held a hackathon to highlight the versatility and productivity of GridDB Cloud. The prompt of the event was simple: use the power of GridDB Cloud to build any sort of app you want. The webpage for the event mentioned IoT, but the prompt was open, and participants could submit ideas based on any personal interest or expertise. The event (officially titled the GridDB IoT Hackathon) had two distinct phases: an online idea phase, in which teams of 2-5 could submit their ideas with no coding necessary, just a basic blueprint of how they planned to implement them; and an all-out in-person event hosted in Bengaluru, India for the top 5 teams, as decided by the judging panel. As exciting as the in-person event was, we will save that portion for another day; today's article focuses on the online portion of the event.

And now, for some numbers. Over 250 participants signed up for the event through the online portal, and from there, 28 teams submitted their ideas. We had ideas ranging from health, to finance, to on-the-field sensors for a variety of different purposes. All in all, we were very impressed and flattered by the breadth and range of project ideas submitted by the wonderful GridDB community. Due to technical issues, we lost access to the original hackathon portal, but we have re-created the gallery for all to see here: Hackathon Gallery.

Please be on the lookout for the next article, where we will showcase the 5 submissions that graduated to the finalist round at the in-person event in Bengaluru.

Introduction

As global populations grow, agriculture faces mounting pressure to produce more food sustainably. Precision agriculture, powered by IoT and real-time data analytics, offers a solution by optimizing crop management through actionable insights. However, traditional relational databases struggle with the velocity and volume of agricultural time-series data, creating performance bottlenecks when farmers need immediate analysis of crop conditions, weather patterns, and soil metrics.

GridDB's specialized time-series architecture addresses these challenges through efficient storage, high-speed ingestion, and optimized query performance for temporal data patterns. Its ability to handle mixed-frequency sensor data, from hourly weather readings to daily satellite imagery, makes it particularly well-suited for agricultural monitoring systems.

This article explores a simple Spring Boot application that leverages GridDB Cloud to monitor crop health predictively. Our implementation integrates real-time weather data from NASA's POWER API, stores environmental time series in GridDB, and exposes analytics through REST APIs for dashboard visualization.

Time-Series Database Requirements for Agricultural IoT

An ideal time-series database platform must efficiently ingest, store, and analyze data from diverse sources to enable data-driven decisions in crop health monitoring, irrigation management, and yield prediction. Meeting these demands involves addressing several critical data characteristics and infrastructure requirements, such as:

High Cardinality: Numerous unique time series generated by varied sensor types, farm locations, and devices.
Multi-Frequency Data Streams: Environmental sensors update every 15 minutes, weather APIs typically provide hourly updates, and satellite imagery is collected daily or at intervals of 1 to 7 days.
Sensor Metrics: Each sensor point typically captures four distinct metrics (e.g., temperature, humidity, soil moisture, and light intensity).
Ingestion Volume: Per farm, 100 sensors × 4 metrics × 96 readings/day = ~38,400 records daily; across 500+ farms, 500 farms × ~38.4K records ≈ 19.2 million inserts per day.
Timestamp Precision: Data is recorded with microsecond-level precision to support fine-grained temporal analysis.
Query and Storage Requirements: Optimized for frequent time-based queries (e.g., daily or weekly trends), with efficient data compression and long-term retention strategies to manage continuous high-volume ingestion.

These patterns demand a robust time-series database like GridDB, designed to handle high ingest rates, granular timestamps, and complex queries, all essential for scalable agricultural IoT solutions.

Project Overview: Building the Predictive Crop Health Monitoring System

In this project, we focus on real-time crop health monitoring by processing environmental data and displaying important insights through visual dashboards, built with Spring Boot, GridDB Cloud, Thymeleaf, and Chart.js.

System Workflow

Collection: Fetches environmental metrics from NASA's POWER API.
Storage: Stores data in GridDB using a time-series model (the CropHealthData container).
Analysis: Processes raw data into actionable stress indicators using CropHealthService.
Visualization: Displays daily, weekly, and monthly trends on a web-based dashboard.
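The ingestion figures quoted above are easy to sanity-check. The short Java sketch below reproduces the arithmetic (the sensor counts and farm count are the example values from this article, not measurements):

```java
public class IngestionEstimate {
    // Readings per sensor per day at a 15-minute interval: 24h * 60min / 15min
    static int readingsPerDay() {
        return 24 * 60 / 15;
    }

    // Records per farm per day: sensors × metrics per sensor × readings per day
    static long recordsPerFarmPerDay(int sensors, int metricsPerSensor) {
        return (long) sensors * metricsPerSensor * readingsPerDay();
    }

    public static void main(String[] args) {
        long perFarm = recordsPerFarmPerDay(100, 4);
        System.out.println(perFarm);       // records per farm per day (38,400)
        System.out.println(perFarm * 500); // fleet-wide daily inserts across 500 farms
    }
}
```

At 500 farms this works out to 19.2 million rows per day, which is the sustained insert rate the database has to absorb.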
Setting Up GridDB Cluster and Spring Boot Integration

Project Structure

Here's a suggested project structure for this application:

├── pom.xml
├── src
│   ├── main
│   │   ├── java
│   │   │   └── mycode
│   │   │       ├── controller
│   │   │       │   └── CropHealthController.java
│   │   │       ├── dto
│   │   │       │   └── CropHealthData.java
│   │   │       ├── MySpringBootApplication.java
│   │   │       └── service
│   │   │           ├── CollectionService.java
│   │   │           ├── CropHealthService.java
│   │   │           └── RestTemplateConfig.java
│   │   └── resources
│   │       ├── application.properties
│   │       └── templates
│   │           └── dashboard.html

This structure separates controllers, DTOs, services, and the application entry point into distinct layers, enhancing modularity and maintainability. It can be further customized based on individual requirements.

Set Up GridDB Cloud

For this exercise, we will be using the cloud version of GridDB. Start by visiting the GridDB Cloud portal and signing up for an account. Based on requirements, either the free plan or a paid plan can be selected for broader access. After registration, an email will be sent containing essential details, including the Web API URL and login credentials. Once the login details are received, log in to the Management GUI to access the cloud instance.

Create Database Credentials

Before interacting with the database, we must create a database user:

Navigate to Security Settings: In the Management GUI, go to the "GridDB Users" tab.
Create a Database User: Click "Create Database User," enter a username and password, and save the credentials. For example, set the username as soccer_admin and a strong password.
Store Credentials Securely: These will be used in your application to authenticate with GridDB Cloud.

Set Allowed IP Addresses

To restrict access to authorized sources, configure the allowed IP settings:

Navigate to Network Access: In the Management GUI, go to the "Network Access" tab and locate the "Allowed IP" section.
Add IP Addresses: For development, you can temporarily add your local machine's IP.

Add POM Dependencies

Here's an example of how to configure the dependencies in the pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-griddb-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>my-griddb-app</name>
  <description>GridDB Application with Spring Boot</description>
  <url>http://maven.apache.org</url>
  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.4</version>
    <relativePath /> <!-- lookup parent from repository -->
  </parent>
  <properties>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <jackson.version>2.16.1</jackson.version>
    <lombok.version>1.18.30</lombok.version>
    <springdoc.version>2.3.0</springdoc.version>
    <jersey.version>3.1.3</jersey.version>
    <httpclient.version>4.5.14</httpclient.version>
  </properties>
  <dependencies>
    <!-- Spring Boot Starters -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
      <exclusions>
        <exclusion>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <!-- Testing -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
    <!-- API Documentation -->
    <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
      <version>${springdoc.version}</version>
    </dependency>
    <!-- JSON Processing -->
    <dependency>
      <groupId>org.glassfish.jersey.core</groupId>
      <artifactId>jersey-client</artifactId>
      <version>${jersey.version}</version>
    </dependency>
    <dependency>
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
      <version>20231013</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <!-- HTTP Client -->
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>${httpclient.version}</version>
    </dependency>
    <!-- Development Tools -->
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>${lombok.version}</version>
      <optional>true</optional>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
          <excludes>
            <exclude>
              <groupId>org.projectlombok</groupId>
              <artifactId>lombok</artifactId>
            </exclude>
          </excludes>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.11.0</version>
        <configuration>
          <source>${maven.compiler.source}</source>
          <target>${maven.compiler.target}</target>
          <annotationProcessorPaths>
            <path>
              <groupId>org.projectlombok</groupId>
              <artifactId>lombok</artifactId>
              <version>${lombok.version}</version>
            </path>
          </annotationProcessorPaths>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

The application.properties file stores configuration settings such as the GridDB Cloud API URL and key, and the NASA POWER API URL, enabling the app to connect securely to these external services:

griddb.rest.url=https://your-griddb-cloud-url/rest
griddb.api.key=your-griddb-api-key
nasa.power.api.url=https://power.larc.nasa.gov/api/temporal/daily/point

Technical Implementation

In the following section, we'll walk through the key steps required to set up the project.

Container Setup in GridDB Cloud

A container named CropHealthData is created in GridDB Cloud, defined as a time-series type, with timestamp set as the row key. Next, we define the schema, which (matching the row order used later in CollectionService) includes the following columns: timestamp (TIMESTAMP, row key), latitude (DOUBLE), longitude (DOUBLE), solarRadiation (DOUBLE), dewPoint (DOUBLE), windSpeed (DOUBLE), surfacePressure (DOUBLE), and specificHumidity (DOUBLE).

Data Collection: CollectionService

The CollectionService handles weather data ingestion by acting as the interface between external data sources and the GridDB backend. It integrates with NASA's POWER API to retrieve daily environmental metrics crucial for monitoring crop health.

Weather Metrics

In this section, we retrieve high-precision, real-time data from an external API. The service accesses various environmental parameters through the following endpoint:

https://power.larc.nasa.gov/api/temporal/daily/point?parameters=ALLSKY_SFC_SW_DWN,T2MDEW,WS2M,PS,QV2M&community=AG&longitude=-93.5&latitude=42.0&start=%s&end=%s&format=JSON

API Reference: NASA POWER API Documentation

Data is retrieved for a fixed geographical location (latitude: 42.0, longitude: -93.5) via a GET request to NASA's temporal endpoint. The parameters fetched include:

ALLSKY_SFC_SW_DWN: Solar Radiation (MJ/m²/day)
T2MDEW: Dew Point (°C)
WS2M: Wind Speed (m/s)
PS: Surface Pressure (kPa)
QV2M: Specific Humidity (g/kg)

After receiving the JSON response:

Relevant fields are extracted.
Timestamps are formatted to GridDB's required pattern: yyyy-MM-dd HH:mm:ss.
Invalid or missing values (represented as -999) are filtered out to ensure data quality.

Here is the implementation of the CollectionService.java file:

import java.net.HttpURLConnection;
import java.net.URL;
import java.text.SimpleDateFormat;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Iterator;
import java.util.Scanner;

import org.json.JSONArray;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

@Service
public class CollectionService {

    @Value("${nasa.power.api.url}")
    private String nasaApiUrl;

    private static String gridDBRestUrl;
    private static String gridDBApiKey;

    @Value("${griddb.rest.url}")
    public void setGridDBRestUrl(String in) {
        gridDBRestUrl = in;
    }

    @Value("${griddb.api.key}")
    public void setGridDBApiKey(String in) {
        gridDBApiKey = in;
    }

    public void fetchAndStoreData(String startDate, String endDate) {
        try {
            // Fetch JSON data from the NASA POWER API for the requested date range
            String jsonData = fetchJSONFromNASA(String.format(
                    "%s?parameters=ALLSKY_SFC_SW_DWN,T2MDEW,WS2M,PS,QV2M&community=AG&longitude=-93.5&latitude=42.0&start=%s&end=%s&format=JSON",
                    nasaApiUrl, startDate, endDate));
            // Process and send data to GridDB Cloud
            sendBatchToGridDB(jsonData);
        } catch (Exception e) {
            throw new RuntimeException("Failed to fetch and store data", e);
        }
    }

    private String fetchJSONFromNASA(String urlString) throws Exception {
        URL url = new URL(urlString);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");
        if (conn.getResponseCode() != 200) {
            throw new RuntimeException("Failed to fetch data: HTTP error code: " + conn.getResponseCode());
        }
        Scanner scanner = new Scanner(url.openStream());
        StringBuilder response = new StringBuilder();
        while (scanner.hasNext()) {
            response.append(scanner.nextLine());
        }
        scanner.close();
        conn.disconnect();
        return response.toString();
    }

    private void sendBatchToGridDB(String jsonData) throws Exception {
        JSONArray batchData = new JSONArray();
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(jsonData);
        JsonNode data = root.path("properties").path("parameter");
        JsonNode allSkyNode = data.path("ALLSKY_SFC_SW_DWN");

        // Iterate over the field names (dates) in ALLSKY_SFC_SW_DWN
        Iterator<String> dateIterator = allSkyNode.fieldNames();
        while (dateIterator.hasNext()) {
            String dateStr = dateIterator.next();
            double solarRadiation = allSkyNode.path(dateStr).asDouble();
            double dewPoint = data.path("T2MDEW").path(dateStr).asDouble();
            double windSpeed = data.path("WS2M").path(dateStr).asDouble();
            double surfacePressure = data.path("PS").path(dateStr).asDouble();
            double specificHumidity = data.path("QV2M").path(dateStr).asDouble();

            // Skip records with -999 (missing data)
            if (solarRadiation == -999 || dewPoint == -999 || windSpeed == -999
                    || surfacePressure == -999 || specificHumidity == -999) {
                continue;
            }

            JSONArray rowArray = new JSONArray();
            rowArray.put(formatTimestamp(dateStr));
            rowArray.put(42.0);   // latitude
            rowArray.put(-93.5);  // longitude
            rowArray.put(solarRadiation);
            rowArray.put(dewPoint);
            rowArray.put(windSpeed);
            rowArray.put(surfacePressure);
            rowArray.put(specificHumidity);
            batchData.put(rowArray);
        }

        if (batchData.length() > 0) {
            sendPutRequest(batchData);
        } else {
            System.out.println("No valid data to send to GridDB.");
        }
    }

    private String formatTimestamp(String inputTimestamp) {
        try {
            if (inputTimestamp == null || inputTimestamp.isEmpty()) {
                return LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss")) + "Z";
            }
            SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd");
            SimpleDateFormat outputFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
            return outputFormat.format(sdf.parse(inputTimestamp));
        } catch (Exception e) {
            return LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss")) + "Z";
        }
    }

    private void sendPutRequest(JSONArray batchData) throws Exception {
        URL url = new URL(gridDBRestUrl);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setDoOutput(true);
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setRequestProperty("Authorization", gridDBApiKey);

        // Send the JSON payload
        try (var os = conn.getOutputStream()) {
            os.write(batchData.toString().getBytes());
            os.flush();
        }

        int responseCode = conn.getResponseCode();
        if (responseCode == HttpURLConnection.HTTP_OK || responseCode == HttpURLConnection.HTTP_CREATED) {
            System.out.println("Batch inserted successfully.");
        } else {
            throw new RuntimeException("Failed to insert batch. Response: " + responseCode);
        }
        conn.disconnect();
    }
}

Batch Insertion to GridDB

Next, the cleaned data is sent to the CropHealthData container in GridDB using a PUT request to the /CropHealthData/rows endpoint. This process:

Utilizes batch insertion to reduce network overhead.
Takes advantage of GridDB's high-throughput performance, which is ideal for time-series ingestion at scale.

Reference: GridDB Performance Overview

Data Retrieval and Analytical Processing

The CropHealthService is responsible for retrieving data from GridDB and performing analytical computations to derive actionable indicators. It queries the CropHealthData container using GridDB's REST API, which returns a JSON response with a rows array. Each row contains eight fields, with timestamps formatted as yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSSZ (e.g., 2025-01-01T00:00:00.000000000Z). The service parses this response, mapping each row to a CropHealthData DTO object.

package mycode.dto;

import java.util.Date;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@AllArgsConstructor
@NoArgsConstructor
public class CropHealthData {
    private Date timestamp;
    private double latitude;
    private double longitude;
    private double solarRadiation;
    private double dewPoint;
    private double windSpeed;
    private double surfacePressure;
    private double specificHumidity;
    private String status;
}

This object is then used to compute five key indicators:

Photosynthetic Stress: Identifies days with solar radiation below 10 MJ/m²/day, aggregated by month to highlight periods of reduced photosynthesis.
Fungal Risk: Counts days with dew point above 20°C within a 7-day window, signaling potential fungal disease outbreaks.
Wind Stress: Tracks days with wind speed exceeding 5 m/s, aggregated weekly to assess mechanical stress on crops.
Atmospheric Stress: Detects daily surface pressure drops greater than 2 kPa, indicating weather instability.
Moisture Deficit: Calculates weekly average specific humidity to evaluate water availability.

These indicators transform raw weather data into insights tailored for agriculture. For example, a high fungal risk score prompts farmers to apply fungicides, while persistent low radiation signals the need for supplemental lighting.

Visualization: Unified Dashboard API

Visualization is a key feature of the system, delivered through a Thymeleaf-based dashboard powered by Chart.js for dynamic, interactive charts. The CropHealthController exposes a single API endpoint:

GET /api/dashboard-data

This endpoint calls CropHealthService.getAllVisualizationData to retrieve data for all ten charts in one JSON response. Here is the complete implementation of the CropHealthController.java file:
@Controller
@RequestMapping("/crop-health")
public class CropHealthController {

    @Autowired
    private CropHealthService cropHealthService;

    @GetMapping("/dashboard")
    public String dashboard(Model model) {
        model.addAttribute("solarRadiationData", cropHealthService.getVisualizationData("solar_radiation"));
        model.addAttribute("dewPointData", cropHealthService.getVisualizationData("dew_point"));
        model.addAttribute("windSpeedData", cropHealthService.getVisualizationData("wind_speed"));
        model.addAttribute("surfacePressureData", cropHealthService.getVisualizationData("surface_pressure"));
        model.addAttribute("specificHumidityData", cropHealthService.getVisualizationData("specific_humidity"));
        model.addAttribute("photosyntheticStressData", cropHealthService.getVisualizationData("photosynthetic_stress"));
        model.addAttribute("fungalRiskData", cropHealthService.getVisualizationData("fungal_risk"));
        model.addAttribute("windStressData", cropHealthService.getVisualizationData("wind_stress"));
        model.addAttribute("atmosphericStressData", cropHealthService.getVisualizationData("atmospheric_stress"));
        model.addAttribute("moistureDeficitData", cropHealthService.getVisualizationData("moisture_deficit"));
        return "dashboard";
    }
}

Running the Project

To build and run the application, execute:

mvn clean install && mvn spring-boot:run

Accessing the Dashboard

After successfully launching the Spring Boot application, users can access the interactive visualization dashboard by opening a web browser and navigating to `http://localhost:9090/crop-health/dashboard` (matching the controller's request mappings above).
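Behind the gauge and bar charts, the stress indicators reduce to simple aggregations over the stored rows. As an illustrative sketch, not the article's actual CropHealthService code, here is how the fungal-risk count over the most recent 7-day window might be computed (the 20°C threshold and 7-day window come from the indicator definition above; the method and class names are hypothetical):

```java
import java.util.List;

public class FungalRisk {
    // Counts readings above 20°C within the most recent 7-entry (daily) window,
    // a stand-in for the dashboard's fungal-risk gauge.
    static long fungalRiskDays(List<Double> dailyDewPoints) {
        int from = Math.max(0, dailyDewPoints.size() - 7);
        return dailyDewPoints.subList(from, dailyDewPoints.size()).stream()
                .filter(dewPoint -> dewPoint > 20.0)
                .count();
    }

    public static void main(String[] args) {
        List<Double> dewPoints = List.of(18.5, 21.0, 22.3, 19.8, 20.5, 23.1, 17.0, 21.2);
        System.out.println(fungalRiskDays(dewPoints)); // risk days in the last 7 entries
    }
}
```

The other indicators (wind stress, atmospheric stress, moisture deficit) follow the same pattern with different thresholds and aggregation windows.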
The dashboard presents a comprehensive view of crop health through ten interactive charts:

Environmental Metrics (Line Charts)

Solar Radiation: Daily solar radiation values in MJ/m²/day, helping identify optimal photosynthesis periods.
Dew Point: Temperature at which air becomes saturated (°C), crucial for fungal disease prediction.
Wind Speed: Daily wind measurements in m/s, indicating potential mechanical stress on crops.
Surface Pressure: Atmospheric pressure readings in kPa, showing weather stability.
Specific Humidity: Daily moisture content in g/kg, essential for irrigation planning.

Crop Stress Indicators

Photosynthetic Stress: Monthly bar chart showing days with suboptimal radiation (< 10 MJ/m²/day).
Fungal Risk: Gauge chart displaying the 7-day dew point risk assessment (> 20°C).
Wind Stress: Weekly bar chart tracking high wind events (> 5 m/s).
Atmospheric Stress: Area chart highlighting significant pressure drops (> 2 kPa/day).
Moisture Deficit: Weekly line chart of humidity averages for water management.

Conclusion

Precision farming relies on fast, accurate time-series data to make informed decisions that improve crop health and yield. By harnessing GridDB's ability to handle large-scale, high-frequency environmental data in real time, farmers can detect stress factors early and respond proactively. This timely insight reduces waste, optimizes resource use, and ultimately leads to more sustainable and efficient agricultural practices. Fast, reliable time-series databases are essential for unlocking the full potential of precision agriculture in today's data-driven world.

With GridDB Cloud 3.1, you can now access the native API of GridDB through Azure's virtual network peering. The way it works is that any virtual network (vnet) you set up in your Azure cloud environment can establish what is called a peering connection, which allows two disparate networks to communicate through Azure's vast resources. Through this, any virtual machine connected to that vnet can communicate with and use the GridDB Cloud native APIs. We discuss this at greater length here: https://griddb.net/en/blog/griddb-cloud-v3-1-how-to-use-the-native-apis-with-azures-vnet-peering/

In this article, we will build upon that idea and show you how to set up a VPN which will allow you to access your GridDB Cloud from your local environment, meaning you can freely use GridDB with your existing application code as long as you are connected to the VPN.

Prereqs

To fully utilize GridDB Cloud with native APIs in your local environment, you will, of course, need access to one of the paid GridDB Cloud instances: https://griddb.net/en/blog/griddb-cloud-azure-marketplace/. The nice thing, though, is that there are one-month trial versions on the marketplace so that you may try out GridDB Cloud's features for free!

You will also need to have set up the vnet peering as described in the opening paragraph of this article: GridDB Cloud v3.1 – How to Use the Native APIs with Azure's VNET Peering. If you have this set up, you should have the following in your Azure resources:

GridDB Cloud (Pay As You Go)
Azure Virtual Network with a peering connection to GridDB Cloud
A virtual machine connected to the above vnet

Please note that all of the above will incur some sort of cost on Azure (for example, an Azure VM B1 instance costs roughly ~$8/month if left on at all times).

OpenVPN and IP Masquerading

The way this setup works is through something called IP masquerading, which is "a process where one computer acts as an IP gateway for a network.
All computers on the network send their IP packets through the gateway, which replaces the source IP address with its own address and then forwards it to the internet." (https://www.linux.com/training-tutorials/what-ip-masquerading-and-when-it-use/)

Essentially, traffic from your local machine is intended for the GridDB Cloud IP but routes through the VPN first. To the database, the request appears to come from a machine within the network (the VM), so it accepts the request, makes its response, and pushes it back through the virtual network, through the virtual machine, and to your local environment. So to get this running, you simply need to set up OpenVPN on the Azure virtual machine and then turn on the rule to do IP masquerading.

Install OpenVPN

To install OpenVPN and the client certs for my machine, I used the guide from Ubuntu: https://documentation.ubuntu.com/server/how-to/security/install-openvpn/. Through this guide, you will have OpenVPN installed on your Azure VM and certs on your local machine that can connect to your VM.

1. Install OpenVPN & Easy-RSA

sudo apt install openvpn easy-rsa

2. Set Up the PKI (Certificate Authority)

sudo make-cadir /etc/openvpn/easy-rsa
cd /etc/openvpn/easy-rsa/

Initialize the PKI: ./easyrsa init-pki
Build the CA: ./easyrsa build-ca

3. Generate Server Certificates

Generate the server key request: ./easyrsa gen-req myservername nopass
Generate Diffie-Hellman params: ./easyrsa gen-dh
Sign the server certificate: ./easyrsa sign-req server myservername

Copy the required files into /etc/openvpn/:

pki/dh.pem
pki/ca.crt
pki/issued/myservername.crt
pki/private/myservername.key

4. Create Client Certificates

Generate the client key request: ./easyrsa gen-req myclient1 nopass
Sign the client cert: ./easyrsa sign-req client myclient1

Securely copy to the client machine:

ca.crt (from earlier)
myclient1.crt (inside /pki/issued)
myclient1.key (inside /pki/private)

5. Configure the OpenVPN Server

Copy the sample config: sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf /etc/openvpn/myserver.conf

Edit myserver.conf so these lines reference your certs:

ca ca.crt
cert myservername.crt
key myservername.key
dh dh.pem

Generate the TLS auth key: sudo openvpn --genkey secret ta.key

Enable IP forwarding: edit /etc/sysctl.conf, set net.ipv4.ip_forward=1, then apply with sudo sysctl -p /etc/sysctl.conf

Start the server: sudo systemctl start openvpn@myserver

6. Configure the Client

Install OpenVPN: sudo apt install openvpn
Copy the sample config: sudo cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn/

Place these files on the client:

ca.crt
myclient1.crt
myclient1.key
ta.key

Edit client.conf:

client
remote your.server.ip 1194
ca ca.crt
cert myclient1.crt
key myclient1.key
tls-auth ta.key 1

Start the client: sudo systemctl start openvpn@client

7. Quick Troubleshooting

Check logs:

sudo journalctl -u openvpn@myserver -xe
sudo journalctl -u openvpn@client -xe

Ensure:

Ports match
Protocol (udp/tcp) matches
tls-auth index matches (0 on server, 1 on client)
Same cipher, auth, and dev tun settings

IP Masquerading

As explained above, if you try it now, it simply won't work: the traffic reaches the GridDB database from your local environment's IP, which is blocked by security rules. But once this setting is turned on, it will work. Run the following command on your VM:

sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

And that should do it! To ensure it works, you can of course run the sample code from the previous blog. But before going through that effort, you can also simply try this: from the local environment (connected to the VPN), ping the IP of your GridDB Cloud DB (it can be fetched from the notification provider URL on the GridDB Cloud UI home page):

ping 172.26.30.68
Then, on your Azure VM (the one hosting the VPN that can also connect to GridDB Cloud), run:

sudo tcpdump -i eth0 -n host 172.26.30.68

If successful, your pings to GridDB Cloud will be routed through the VM on their way to the destination. Cool!

To run the sample code, you can start by cloning the GitHub repo and changing to the correct branch:

$ git clone https://github.com/griddbnet/Blogs.git --branch griddb_cloud_paid_guide

Then set the environment variables for your GridDB connection:

export GRIDDB_NOTIFICATION_PROVIDER=""
export GRIDDB_CLUSTER_NAME=""
export GRIDDB_USERNAME=""
export GRIDDB_PASSWORD=""
export GRIDDB_DATABASE=""

And then from here, navigate to either the java or python dirs and run them!

For java:

$ mvn clean package
$ java -jar target/java-samples-1.0-SNAPSHOT-jar-with-dependencies.jar

For python, after installing the python client, install the requirements text (python3.12 -m pip install -r requirements.txt), make sure your JAVA_HOME and CLASSPATH env variables are set, and then run the code with python3.12.
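Before running the full samples, a plain TCP connect from the local environment is another quick way to confirm the tunnel routes traffic to the database. The sketch below is an illustrative check, not part of the sample repo; the IP and port are placeholders, so substitute the address and port listed by your own notification provider:

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class TunnelCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean canReach(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Placeholder values: use your GridDB Cloud private IP and the port
        // from your notification provider URL.
        String host = args.length > 0 ? args[0] : "172.26.30.68";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 10001;
        System.out.println(canReach(host, port, 3000) ? "reachable" : "unreachable");
    }
}
```

If this prints "reachable" while the VPN is up and "unreachable" while it is down, the masquerade rule is doing its job.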

Welcome! We're about to build something useful: a volunteer-matching platform that connects skilled medical professionals with health organizations that need them. It's the kind of system you'd see powering real health events, from blood drives to vaccination clinics. By the time we're done, you'll understand how to architect and deploy a complete full-stack application that handles real-world complexity, including matching qualified people to opportunities, managing permissions across different user roles, and keeping everything secure.

The Stack: Technologies That Work Together

We're using a carefully selected tech stack that mirrors what you'll find in production environments:

Spring Boot & Thymeleaf: handles the business rules and data orchestration, and renders dynamic HTML templates on the server side.
GridDB (cloud-hosted NoSQL datastore): stores volunteer profiles, opportunities, and applications.

Each technology serves a specific purpose, and together they create a seamless user experience backed by robust backend logic.

Learning Roadmap

We'll move from foundation to mastery:

Setup & Architecture: We'll start by understanding the three-layer system design, laying out your Maven project structure, and configuring Spring Boot for success.
Core Features: Next, we'll implement the data model (entities, relationships, indexing) and set up the GridDB integration.
User Interface & Experience: Then we'll create server-rendered Thymeleaf templates for browsing opportunities, applying for roles, and managing skills. You'll see how server-side rendering keeps everything simple.
Security: We'll add Spring Security authentication and implement role-based access control, ensuring organizers see different screens than volunteers and that data stays protected.
Real-World Patterns: Finally, we'll integrate real-time slot updates.

By completing this tutorial, you'll understand how to architect a full-stack Java application from database to user interface.
More importantly, you'll have a complete, deployable system you can adapt to other matching problems. Let's build something real.

Project Setup

Here's how we'll set it up:

1. Navigate to start.spring.io
2. Configure your project: Project: Maven; Language: Java; Spring Boot: 3.5.x (latest stable version); Group: com.example; Artifact: springboot-volunteermatching; Java Version: 21
3. Add the following dependencies: Spring Web, Thymeleaf, Spring Security
4. Click Generate to download a ZIP file with the project structure

Once you've downloaded and extracted the project, import it into your IDE. Next, we will create the package structure by grouping the classes based on their respective entities, e.g., a package organization contains the controller, service, DTO, etc.

```
volunteer-matching/
├── pom.xml
├── src/main/java/com/volunteermatching/
│   ├── config/ (RestClient config)
│   ├── griddb/
│   ├── griddbwebapi/
│   ├── opportunity/
│   ├── opportunity_requirement/
│   ├── organization/
│   ├── organization_member/
│   ├── registration/
│   ├── security/ (Auth filters, RBAC)
│   ├── skill/
│   ├── user/
│   └── volunteer_skill/
└── src/main/resources/
    ├── templates/ (Thymeleaf templates)
    └── application.properties (Configuration)
```

Connecting to the GridDB Cloud

Configure the credentials for connecting to the GridDB Cloud through HTTP. Add the following to application.properties:

```
# GridDB Configuration
griddbcloud.base-url=https://cloud5197.griddb.com:443/griddb/v2/gs_cluster
griddbcloud.auth-token=TTAxxxxxxx
```

Next, create a bean of org.springframework.web.client.RestClient, which provides a fluent, builder-based API for sending synchronous HTTP requests with clean syntax and improved readability.
```java
@Configuration
public class RestClientConfig {

    final Logger LOGGER = LoggerFactory.getLogger(RestClientConfig.class);

    @Bean("GridDbRestClient")
    public RestClient gridDbRestClient(
            @NonNull @Value("${griddbcloud.base-url}") final String baseUrl,
            @NonNull @Value("${griddbcloud.auth-token}") final String authToken) {
        return RestClient.builder()
                .baseUrl(baseUrl)
                .defaultHeader(HttpHeaders.AUTHORIZATION, "Basic " + authToken)
                .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                .defaultHeader(HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON_VALUE)
                .defaultStatusHandler(
                        status -> status.is4xxClientError() || status.is5xxServerError(),
                        (request, response) -> {
                            String responseBody = getResponseBody(response);
                            LOGGER.error("GridDB API error: status={} body={}", response.getStatusCode(), responseBody);
                            if (response.getStatusCode().value() == 403) {
                                LOGGER.error("Access forbidden - please check your auth token and permissions.");
                                throw new ForbiddenGridDbConnectionException("Access forbidden to GridDB Cloud API.");
                            }
                            throw new GridDbException("GridDB API error: ", response.getStatusCode(), responseBody);
                        })
                .requestInterceptor((request, body, execution) -> {
                    final long begin = System.currentTimeMillis();
                    ClientHttpResponse response = execution.execute(request, body);
                    logDuration(request, body, begin, response);
                    return response;
                })
                .build();
    }
}
```

- @Bean("GridDbRestClient"): registers this client as a Spring bean so we can inject it anywhere with @Qualifier("GridDbRestClient") final RestClient restClient.
- .baseUrl(baseUrl): sets the common base URL for all requests.
- .defaultHeader(...): adds a header that will be sent with every request.
- .defaultStatusHandler(...): when the API returns an error (4xx or 5xx status code), logs the error. If the status is 403, it throws a custom ForbiddenGridDbConnectionException; for any other error, it throws a general GridDbException.
- .requestInterceptor(...): logs how long each request took, for debugging performance.
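The same defaults can be sketched outside Java, which is handy for quick experiments against the Web API before wiring up Spring. A minimal Python sketch using only the standard library (the helper names and the status-classification strings are our own, not part of any GridDB SDK):

```python
from urllib import request

def griddb_request(base_url: str, path: str, auth_token: str,
                   method: str = "GET", body: bytes = None):
    """Build a urllib Request with the same defaults RestClientConfig applies."""
    req = request.Request(base_url + path, data=body, method=method)
    req.add_header("Authorization", "Basic " + auth_token)
    req.add_header("Content-Type", "application/json")
    req.add_header("Accept", "application/json")
    return req

def classify_status(status_code: int) -> str:
    """Mirror defaultStatusHandler: 403 is a permissions problem, other 4xx/5xx are generic errors."""
    if status_code == 403:
        return "forbidden"
    if 400 <= status_code < 600:
        return "error"
    return "ok"
```

Sending `griddb_request(...)` through `urllib.request.urlopen` and routing the response code through `classify_status` reproduces the Java client's behavior: one place decides whether a response is usable, forbidden, or a generic failure.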
Next, create a helper that each service class will use to talk to GridDB Cloud over HTTP. It wraps the pre-configured RestClient and provides easy-to-use methods for common database operations; all the complicated parts (URLs, headers, error handling) are hidden inside this class.

```java
@Component
public class GridDbClient {

    private final RestClient restClient;

    public GridDbClient(@Qualifier("GridDbRestClient") final RestClient restClient) {
        this.restClient = restClient;
    }

    public void createContainer(final GridDbContainerDefinition containerDefinition) {
        try {
            restClient.post().uri("/containers").body(containerDefinition).retrieve().toBodilessEntity();
        } catch (Exception e) {
            throw new GridDbException("Failed to create container", HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }

    public void registerRows(String containerName, Object body) {
        try {
            restClient.put().uri("/containers/" + containerName + "/rows").body(body).retrieve().toEntity(String.class);
        } catch (Exception e) {
            throw new GridDbException("Failed to execute PUT request", HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }

    public AcquireRowsResponse acquireRows(String containerName, AcquireRowsRequest requestBody) {
        try {
            ResponseEntity<AcquireRowsResponse> responseEntity = restClient
                    .post()
                    .uri("/containers/" + containerName + "/rows")
                    .body(requestBody)
                    .retrieve()
                    .toEntity(AcquireRowsResponse.class);
            return responseEntity.getBody();
        } catch (Exception e) {
            throw new GridDbException("Failed to execute GET request", HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }

    public SQLSelectResponse[] select(List<String> sqlStmts) {
        try {
            ResponseEntity<SQLSelectResponse[]> responseEntity = restClient
                    .post()
                    .uri("/sql/dml/query")
                    .body(sqlStmts)
                    .retrieve()
                    .toEntity(SQLSelectResponse[].class);
            return responseEntity.getBody();
        } catch (Exception e) {
            throw new GridDbException("Failed to execute /sql/dml/query", HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }

    public SqlExecutionResult[] executeSqlDDL(List<String> sqlStmts) {
        try {
            ResponseEntity<SqlExecutionResult[]> responseEntity =
                    restClient.post().uri("/sql/ddl").body(sqlStmts).retrieve().toEntity(SqlExecutionResult[].class);
            return responseEntity.getBody();
        } catch (Exception e) {
            throw new GridDbException("Failed to execute SQL DDL", HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }

    public SQLUpdateResponse[] executeSQLUpdate(List<String> sqlStmts) {
        try {
            ResponseEntity<SQLUpdateResponse[]> responseEntity = restClient
                    .post()
                    .uri("/sql/dml/update")
                    .body(sqlStmts)
                    .retrieve()
                    .toEntity(SQLUpdateResponse[].class);
            return responseEntity.getBody();
        } catch (Exception e) {
            throw new GridDbException("Failed to execute /sql/dml/update", HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }
}
```

The constructor takes the RestClient named GridDbRestClient; the @Qualifier makes sure we get the correct one. Every method follows the same safe structure: try to send an HTTP request using restClient, and if something goes wrong (network issue, wrong data, server error), catch the exception and rethrow it as a GridDbException.

Data Model using DTOs

Now, let's create the Data Transfer Objects (DTOs). DTOs are simple classes that carry information from one part of the app to another, for example, from the database to the screen. In this project, the DTOs represent important things like users, skills, organizations, and volunteer events. Each DTO has its own fields to hold the data and matches the structure of rows inside one GridDB container.

UserDTO: represents a user in the system, such as a volunteer or an organization admin. It's used to create, update, or display user information.

```java
public class UserDTO {

    @Size(max = 255)
    @UserIdValid
    private String id;

    @NotNull
    @Size(max = 255)
    @UserEmailUnique
    private String email;

    @NotNull
    @Size(max = 255)
    private String fullName;

    @NotNull
    private UserRole role;

    // Setters and Getters
}
```

SkillDTO: represents a skill that volunteers can have, such as "First Aid" or "Paramedic." It's used to manage the list of available skills.
```java
public class SkillDTO {

    @Size(max = 255)
    private String id;

    @NotNull
    @Size(max = 255)
    @SkillNameUnique
    private String name;

    public SkillDTO() {}

    public SkillDTO(String id, String name) {
        this.id = id;
        this.name = name;
    }

    // Setters and Getters
}
```

The remaining DTOs follow the same pattern:

- VolunteerSkillDTO: links a user (volunteer) to a specific skill, including details like when the skill expires and its verification status. Useful for tracking what skills a volunteer has and their validity.
- OrganizationDTO: represents an organization that creates volunteer opportunities. It's used to manage organization details.
- OrganizationMemberDTO: links a user to an organization, specifying their role within it (e.g., member or admin). It's used to manage who belongs to which organization.
- OpportunityDTO: represents a volunteer opportunity, like an event that needs volunteers. It's used to create and display opportunities.
- OpportunityRequirementDTO: specifies the skills required for a volunteer opportunity, linking an opportunity to skills and indicating whether each skill is mandatory.
- RegistrationDTO: represents a volunteer's registration for an opportunity, tracking who signed up and the status of their registration.

Service Layer and Business Logic

Next, we implement the service layer. The services use these DTOs to handle business logic, communicate with GridDB Cloud through our client, and prepare data for the controllers. The service classes do not use a repository layer like JPA; instead, they connect directly to GridDB using the GridDbClient. Each service implements its interface, which means it must provide methods like findAll() to fetch all rows, get() to find one by ID, create() to add a new row, and so on. When fetching data, it sends requests to GridDB and maps the returned rows into DTO objects. For saving or updating, it builds a JSON payload with the data and sends it to GridDB Cloud.
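To make that DTO-to-row mapping concrete: GridDB's Web API takes rows as JSON arrays of column values in schema order, so a registration flattens to a plain list before the PUT to /containers/{name}/rows. A minimal sketch (the helper names are our own, and the five-column order matches the registration container this tutorial creates):

```python
import json

# Column order must match the container schema:
# id, userId, opportunityId, status, registrationTime
def registration_to_row(dto: dict) -> list:
    """Flatten a RegistrationDTO-like dict into a GridDB Web API row."""
    return [dto["id"], dto["userId"], dto["opportunityId"],
            dto["status"], dto["registrationTime"]]

def rows_payload(dtos: list) -> str:
    """Body for PUT /containers/{name}/rows: a JSON array of rows."""
    return json.dumps([registration_to_row(d) for d in dtos])
```

This is the "builds a string in JSON format" step described above, isolated so the ordering contract between DTO fields and container columns is visible in one place.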
It also generates unique IDs using TsidCreator and handles date-times carefully by parsing and formatting them.

```java
@Service
public class RegistrationGridDBService implements RegistrationService {

    private final Logger log = LoggerFactory.getLogger(getClass());
    private final GridDbClient gridDbClient;
    private final String TBL_NAME = "VoMaRegistrations";

    public RegistrationGridDBService(final GridDbClient gridDbClient) {
        this.gridDbClient = gridDbClient;
    }

    public void createTable() {
        List<GridDbColumn> columns = List.of(
                new GridDbColumn("id", "STRING", Set.of("TREE")),
                new GridDbColumn("userId", "STRING", Set.of("TREE")),
                new GridDbColumn("opportunityId", "STRING", Set.of("TREE")),
                new GridDbColumn("status", "STRING"),
                new GridDbColumn("registrationTime", "TIMESTAMP"));
        GridDbContainerDefinition containerDefinition = GridDbContainerDefinition.build(TBL_NAME, columns);
        this.gridDbClient.createContainer(containerDefinition);
    }

    @Override
    public List<RegistrationDTO> findAll() {
        AcquireRowsRequest requestBody = AcquireRowsRequest.builder().limit(50L).sort("id ASC").build();
        AcquireRowsResponse response = this.gridDbClient.acquireRows(TBL_NAME, requestBody);
        if (response == null || response.getRows() == null) {
            log.error("Failed to acquire rows from GridDB");
            return List.of();
        }
        return response.getRows().stream()
                .map(this::extractRowToDTO)
                .collect(Collectors.toList());
    }

    private RegistrationDTO extractRowToDTO(List<Object> row) {
        RegistrationDTO dto = new RegistrationDTO();
        dto.setId((String) row.get(0));
        dto.setUserId((String) row.get(1));
        dto.setOpportunityId((String) row.get(2));
        try {
            dto.setStatus(RegistrationStatus.valueOf(row.get(3).toString()));
        } catch (Exception e) {
            dto.setStatus(null);
        }
        try {
            dto.setRegistrationTime(DateTimeUtil.parseToLocalDateTime(row.get(4).toString()));
        } catch (Exception e) {
            dto.setRegistrationTime(null);
        }
        return dto;
    }

    @Override
    public RegistrationDTO get(final String id) {
        AcquireRowsRequest requestBody = AcquireRowsRequest.builder()
                .limit(1L)
                .condition("id == '" + id + "'")
                .build();
        AcquireRowsResponse response = this.gridDbClient.acquireRows(TBL_NAME, requestBody);
        if (response == null || response.getRows() == null) {
            log.error("Failed to acquire rows from GridDB");
            throw new NotFoundException("Registration not found with id: " + id);
        }
        return response.getRows().stream()
                .findFirst()
                .map(this::extractRowToDTO)
                .orElseThrow(() -> new NotFoundException("Registration not found with id: " + id));
    }

    public String nextId() {
        return TsidCreator.getTsid().format("reg_%s");
    }

    @Override
    public String register(String userId, String opportunityId) {
        RegistrationDTO registrationDTO = new RegistrationDTO();
        registrationDTO.setUserId(userId);
        registrationDTO.setOpportunityId(opportunityId);
        registrationDTO.setStatus(RegistrationStatus.PENDING);
        registrationDTO.setRegistrationTime(LocalDateTime.now());
        return create(registrationDTO);
    }
}
```

Implement the validation

We create a dedicated service class for validating volunteer registration requests against opportunity requirements. Some benefits of this approach:

- It hides the complexity behind a single method.
- If the rules change later (e.g., "User needs 2 out of 3 skills"), we only change one place.
- Business validation logic is isolated from HTTP concerns.
- The validation service can be reused by REST APIs or other controllers.
- The service can be unit tested independently.
- Exception handling stays clear, focused, and rich in context.

```java
@Service
public class RegistrationValidationService {

    private final Logger log = LoggerFactory.getLogger(getClass());
    private final RegistrationService registrationService;
    private final OpportunityService opportunityService;
    private final OpportunityRequirementService opportunityRequirementService;
    private final VolunteerSkillService volunteerSkillService;
    private final SkillService skillService;

    public RegistrationValidationService(
            final RegistrationService registrationService,
            final OpportunityService opportunityService,
            final OpportunityRequirementService opportunityRequirementService,
            final VolunteerSkillService volunteerSkillService,
            final SkillService skillService) {
        this.registrationService = registrationService;
        this.opportunityService = opportunityService;
        this.opportunityRequirementService = opportunityRequirementService;
        this.volunteerSkillService = volunteerSkillService;
        this.skillService = skillService;
    }

    public void validateRegistration(final String userId, final String opportunityId) {
        // Check 1: User not already registered
        validateNotAlreadyRegistered(userId, opportunityId);
        // Check 2: Opportunity has available slots
        validateSlotsAvailable(opportunityId);
        // Check 3: User has mandatory skills
        validateMandatorySkills(userId, opportunityId);
    }

    private void validateNotAlreadyRegistered(final String userId, final String opportunityId) {
        Optional<RegistrationDTO> existingReg = registrationService.getByUserIdAndOpportunityId(userId, opportunityId);
        if (existingReg.isPresent()) {
            throw new AlreadyRegisteredException(userId, opportunityId);
        }
    }

    private void validateSlotsAvailable(final String opportunityId) {
        OpportunityDTO opportunity = opportunityService.get(opportunityId);
        Long registeredCount = registrationService.countByOpportunityId(opportunityId);
        if (registeredCount >= opportunity.getSlotsTotal()) {
            throw new OpportunitySlotsFullException(opportunityId, opportunity.getSlotsTotal(), registeredCount);
        }
    }

    private void validateMandatorySkills(final String userId, final String opportunityId) {
        List<VolunteerSkillDTO> userSkills = volunteerSkillService.findAllByUserId(userId);
        List<OpportunityRequirementDTO> opportunityRequirements =
                opportunityRequirementService.findAllByOpportunityId(opportunityId);
        for (OpportunityRequirementDTO requirement : opportunityRequirements) {
            if (!requirement.getIsMandatory()) {
                continue;
            }
            boolean hasSkill = userSkills.stream()
                    .anyMatch(userSkill -> userSkill.getSkillId().equals(requirement.getSkillId()));
            if (!hasSkill) {
                SkillDTO skill = skillService.get(requirement.getSkillId());
                String skillName = skill != null ? skill.getName() : "Unknown Skill";
                throw new MissingMandatorySkillException(userId, opportunityId, requirement.getSkillId(), skillName);
            }
        }
    }
}
```

This service depends on five collaborating services (Opportunity, OpportunityRequirement, VolunteerSkill, Skill, Registration) and throws a custom exception for each validation failure, allowing callers to handle different error scenarios appropriately (e.g., different error messages, logging, etc.).

HTTP Layer

Now, we need a class that handles all incoming web requests, processes user input, and sends back responses. It's the bridge between the user's browser and the application's logic.
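Stripped of the Spring wiring, the three checks in validateRegistration reduce to a membership test, a count, and a set lookup. A Python sketch with plain dicts standing in for the DTOs and services (all names here are illustrative, not part of the project):

```python
def validate_registration(user_id, opp, registrations, user_skill_ids, requirements):
    """Raise ValueError on the first failed check; return None if all pass."""
    # Check 1: user not already registered for this opportunity
    if any(r["userId"] == user_id and r["opportunityId"] == opp["id"] for r in registrations):
        raise ValueError("already registered")
    # Check 2: the opportunity still has open slots
    registered = sum(1 for r in registrations if r["opportunityId"] == opp["id"])
    if registered >= opp["slotsTotal"]:
        raise ValueError("opportunity full")
    # Check 3: every mandatory skill is present in the volunteer's skill set
    for req in requirements:
        if req["isMandatory"] and req["skillId"] not in user_skill_ids:
            raise ValueError("missing mandatory skill: " + req["skillId"])
```

The ordering matters for user experience: cheap identity checks first, the slot count second, and the skill comparison (which needs the most data) last.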
```java
@Controller
@RequestMapping("/opportunities")
public class OpportunityController {

    private final Logger log = LoggerFactory.getLogger(getClass());
    private final OpportunityService opportunityService;
    private final RegistrationService registrationService;
    private final RegistrationValidationService registrationValidationService;
    private final UserService userService;
    private final OpportunityRequirementService opportunityRequirementService;
    private final SkillService skillService;

    public OpportunityController(
            final OpportunityService opportunityService,
            final RegistrationService registrationService,
            final RegistrationValidationService registrationValidationService,
            final UserService userService,
            final OpportunityRequirementService opportunityRequirementService,
            final SkillService skillService) {
        this.opportunityService = opportunityService;
        this.registrationService = registrationService;
        this.registrationValidationService = registrationValidationService;
        this.userService = userService;
        this.opportunityRequirementService = opportunityRequirementService;
        this.skillService = skillService;
    }

    @GetMapping
    public String list(final Model model, @AuthenticationPrincipal final CustomUserDetails userDetails) {
        List<OpportunityDTO> allOpportunities = new ArrayList<>();
        UserDTO user = userDetails != null
                ? userService.getOneByEmail(userDetails.getUsername()).orElse(null)
                : null;
        if (userDetails != null && userDetails.getOrganizations() != null
                && !userDetails.getOrganizations().isEmpty()) {
            OrganizationDTO org = userDetails.getOrganizations().get(0);
            model.addAttribute("organization", org);
            allOpportunities = opportunityService.findAllByOrgId(org.getId());
        } else {
            model.addAttribute("organization", null);
            allOpportunities = opportunityService.findAll();
        }
        List<OpportunityDTO> opportunities = extractOpportunities(allOpportunities, user);
        model.addAttribute("opportunities", opportunities);
        return "opportunity/list";
    }

    @GetMapping("/add")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String add(
            @ModelAttribute("opportunity") final OpportunityDTO opportunityDTO,
            final Model model,
            @AuthenticationPrincipal final CustomUserDetails userDetails) {
        if (userDetails != null && userDetails.getOrganizations() != null
                && !userDetails.getOrganizations().isEmpty()) {
            OrganizationDTO org = userDetails.getOrganizations().get(0);
            opportunityDTO.setOrgId(org.getId());
        }
        opportunityDTO.setId(opportunityService.nextId());
        return "opportunity/add";
    }

    @PostMapping("/add")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String add(
            @ModelAttribute("opportunity") @Valid final OpportunityDTO opportunityDTO,
            final BindingResult bindingResult,
            final RedirectAttributes redirectAttributes) {
        if (bindingResult.hasErrors()) {
            return "opportunity/add";
        }
        opportunityService.create(opportunityDTO);
        redirectAttributes.addFlashAttribute(WebUtils.MSG_SUCCESS, WebUtils.getMessage("opportunity.create.success"));
        return "redirect:/opportunities";
    }

    @GetMapping("/edit/{id}")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String edit(@PathVariable(name = "id") final String id, final Model model) {
        model.addAttribute("opportunity", opportunityService.get(id));
        return "opportunity/edit";
    }

    @PostMapping("/edit/{id}")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String edit(
            @PathVariable(name = "id") final String id,
            @ModelAttribute("opportunity") @Valid final OpportunityDTO opportunityDTO,
            final BindingResult bindingResult,
            final RedirectAttributes redirectAttributes) {
        if (bindingResult.hasErrors()) {
            return "opportunity/edit";
        }
        opportunityService.update(id, opportunityDTO);
        redirectAttributes.addFlashAttribute(WebUtils.MSG_SUCCESS, WebUtils.getMessage("opportunity.update.success"));
        return "redirect:/opportunities";
    }

    @PostMapping("/{id}/registrations")
    public String registrations(
            @PathVariable(name = "id") final String opportunityId,
            final RedirectAttributes redirectAttributes,
            @AuthenticationPrincipal final UserDetails userDetails) {
        UserDTO user = userService
                .getOneByEmail(userDetails.getUsername())
                .orElseThrow(() -> new UsernameNotFoundException("User not found"));
        try {
            // Validate registration using the validation service
            registrationValidationService.validateRegistration(user.getId(), opportunityId);
            // If validation passes, proceed with registration
            OpportunityDTO opportunityDTO = opportunityService.get(opportunityId);
            registrationService.register(user.getId(), opportunityId);
            log.debug("Registration Successful - user: {}, opportunity: {}",
                    user.getFullName(), opportunityDTO.getTitle());
            redirectAttributes.addFlashAttribute(
                    WebUtils.MSG_INFO, WebUtils.getMessage("opportunity.registrations.success"));
            return "redirect:/opportunities/" + opportunityId;
        } catch (AlreadyRegisteredException e) {
            redirectAttributes.addFlashAttribute(
                    WebUtils.MSG_ERROR, WebUtils.getMessage("opportunity.registrations.already_registered"));
            return "redirect:/opportunities/" + opportunityId;
        } catch (OpportunitySlotsFullException e) {
            redirectAttributes.addFlashAttribute(
                    WebUtils.MSG_ERROR, WebUtils.getMessage("opportunity.registrations.full"));
            return "redirect:/opportunities/" + opportunityId;
        } catch (MissingMandatorySkillException e) {
            redirectAttributes.addFlashAttribute(
                    WebUtils.MSG_ERROR,
                    WebUtils.getMessage("opportunity.registrations.missing_skill", e.getSkillName()));
            return "redirect:/opportunities/" + opportunityId;
        }
    }
}
```

The OpportunityController:

- Doesn't do the work itself; it delegates to specialized services, which keeps code organized and reusable.
- Manages everything related to /opportunities URLs, for example, listing volunteer opportunities.
- Receives the services it needs via constructor injection.
- Uses @PreAuthorize to ensure only authorized users can perform protected actions.
- Validates registrations through registrationValidationService; if validation fails, it catches the specific exception and shows a matching error message.
- Stays a clean controller, focused on orchestration only.

User Interface Preview

Listing opportunity page:

Register page:

Conclusion

Building a volunteer-matching web app for a health event is a practical project that trains core skills: Spring Boot service design, server-rendered Thymeleaf UI, cloud NoSQL integration, and RBAC. Feel free to add more features like email notifications or calendar integration. Keep building, keep learning.

Modern ecological research is largely data-driven, with actionable insights and decisions made using massive, complex datasets.

With GridDB Cloud now able to connect to your code through its native NoSQL interface (Java, Python, etc.) rather than only the Web API, we can explore connecting to various Azure services through virtual network peering. Because our GridDB Cloud instance can reach anything connected to our virtual network thanks to the peering connection, any service that can join a virtual network can now communicate directly with GridDB Cloud. Note: this is only available for the GridDB Cloud offered on the Microsoft Azure Marketplace; the GridDB Cloud Free Plan from the Toshiba page does not support VNET peering. Source code can be found on the griddbnet GitHub: $ git clone https://github.com/griddbnet/Blogs.git --branch azure_connected_services

Introduction

In this article, we will explore connecting our GridDB Cloud instance to Azure's IoT Hub to store telemetry data. We previously made a web course on how to set up the Azure IoT Hub with GridDB Cloud, but through the Web API; it can be found here: https://www.udemy.com/course/griddb-and-azure-iot-hub/?srsltid=AfmBOopFTwFHI7OvQOEXt4P_cWxuo3NaJ9XkbNDHHWX5Tgky4QZzJlD3. You can also learn how to connect your GridDB Cloud instance to your Azure virtual network through VNET peering here: https://griddb.net/en/blog/griddb-cloud-v3-1-how-to-use-the-native-apis-with-azures-vnet-peering/. As a bonus, we have also written a blog on connecting your local environment to your cloud-hosted GridDB instance through a VPN, so you can work entirely from your local programming environment: GridDB Cloud v3.1 – How to Use the Native APIs with Azure's VNET Peering. So for this one, let's get started with our IoT Hub implementation. We will be setting up an IoT Hub with any number of devices which will trigger a GridDB write whenever telemetry data is detected.
We will then also set up another Azure Function which runs on a simple timer (every 1 hour) and performs a simple aggregation of the IoT sensor data, to keep the data tidy and simplify analysis. There is also source code for setting up a Kafka connection through a timer which reads all data from within the past 5 minutes and streams it out through Kafka, but we won't discuss it here.

Azure's Cloud Infrastructure

Let's talk briefly about the Azure services we will need to use to get all of this running.

Azure's IoT Hub

You can read about what the IoT Hub does here: https://learn.microsoft.com/en-us/azure/iot-hub/. Its purpose is to make it easy to manage a fleet of real-world IoT sensors emitting data at intervals that needs to be stored and analyzed. For this article, we will simply create one virtual device and push data through a Python script provided by Microsoft (source code here: https://github.com/Azure/azure-iot-sdk-python). You can learn how to create the IoT Hub and how to deploy code/functions through an older blog: https://griddb.net/en/blog/iot-hub-azure-griddb-cloud/. Because that information is covered there, we will continue on assuming you have already built the IoT Hub in your Azure account, and we will just discuss the source code needed to get our data to GridDB Cloud through the native Java API.

Azure Functions

The real glue of this setup is our Azure Functions. For this article, I created an Azure Functions Standard Plan and connected it to the virtual network which is already peer-connected to our GridDB Cloud instance. With this simple step, all of the Azure Functions we deploy on this plan can communicate with GridDB Cloud seamlessly.
And for the Azure Function that pairs with the IoT Hub to detect events and run code on that data, we use a specific function binding in our Java code: @EventHubTrigger(name = "message", eventHubName = "events", connection = "IotHubConnectionString", consumerGroup = "myfuncapp-cg", cardinality = Cardinality.ONE) String message. In this case, we are telling our Azure Function that whenever an event occurs in our IoT Hub (as identified by the IotHubConnectionString), we want to run the following code. The magic is all contained within Azure Functions and that IotHubConnectionString, which is the value the IoT Hub calls its primary connection string. So when you create your Azure Function, head to Settings -> Environment Variables and set the IotHubConnectionString as well as your GridDB Cloud credentials. If you are using VS Code, you can set these vars in the local.settings.json file created when you select Azure Functions: Create Function App (as mentioned in the blog linked above) and then run Azure Functions: Deploy Local Settings.

IoT Hub Event Triggering

Now let's look at the actual source code that pushes data to GridDB Cloud.

Java Source Code for Pushing Event Telemetry Data

Our goal here is to log all of our IoT Hub sensors' data into persistent storage (aka GridDB Cloud). To do this, we use Azure Functions and their special bindings/triggers. In this case, we want to detect whenever our IoT Hub's sensors receive telemetry data, which fires off our Java code to forge a connection to GridDB through its NoSQL interface and simply write that row of data.
Here is the main method in Java:

```java
public class IotTelemetryHandler {

    private static GridDB griddb = null;
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @FunctionName("IoTHubTrigger")
    public void run(
            @EventHubTrigger(name = "message", eventHubName = "events",
                    connection = "IotHubConnectionString", consumerGroup = "myfuncapp-cg",
                    cardinality = Cardinality.ONE) String message,
            @BindingName("SystemProperties") Map<String, Object> properties,
            final ExecutionContext context) {

        TelemetryData data;
        try {
            data = MAPPER.readValue(message, TelemetryData.class);
        } catch (Exception e) {
            context.getLogger().severe("Failed to parse JSON message: " + e.getMessage());
            context.getLogger().severe("Raw Message: " + message);
            return;
        }

        try {
            context.getLogger().info("Java Event Hub trigger processed a message: " + message);

            String deviceId = properties.get("iothub-connection-device-id").toString();
            String eventTimeIso = properties.get("iothub-enqueuedtime").toString();
            Instant enqueuedInstant = Instant.parse(eventTimeIso);
            long eventTimeMillis = enqueuedInstant.toEpochMilli();
            Timestamp dbTimestamp = new Timestamp(eventTimeMillis);
            data.ts = dbTimestamp;

            context.getLogger().info("Data received from Device: " + deviceId);

            griddb = new GridDB();
            String containerName = "telemetryData";
            griddb.CreateContainer(containerName);
            griddb.WriteToContainer(containerName, data);
            context.getLogger().info("Successfully saved to DB.");
        } catch (Throwable t) {
            context.getLogger().severe("CRITICAL: Function execution failed with exception:");
            context.getLogger().severe(t.toString());
            // throw new RuntimeException("GridDB processing failed", t);
        }
    }
}
```

The Java code itself is vanilla; it's the Azure Functions binding that does the real magic. As explained above, the IoT Hub connection string directs which events are polled so their values can be grabbed and eventually written to GridDB.

Data Aggregation

So now we've got thousands of rows of data from our sensors inside of our DB.
A typical workflow in this scenario might be a separate service which runs aggregations on a timer, to help manage the data or keep an easy-to-reference snapshot of your sensors' readings. Python is a popular vehicle for data-science-type operations, so let's set up the GridDB Python client and run a simple average function every hour.

Python Client

While the Java client works in an Azure Function out of the box (as shown above), the Python client has some extra installation and runtime requirements. Specifically, we need Java installed, along with some purpose-built jar files. The easiest way to get this environment set up in an Azure Function is to use Docker: we can bake in all of the libraries and instructions needed to install the Python client and deploy the container with all source code as-is. The Python script will then run on a timer every hour and write to a new GridDB Cloud table which keeps track of the hourly aggregates of each data point.

Dockerize Python Client

To dockerize our Python client, we need to convert the instructions for installing the Python client into Docker instructions, as well as copy in the source code and credentials.
Here is what the Dockerfile looks like:

```dockerfile
FROM mcr.microsoft.com/azure-functions/python:4-python3.12

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
ENV PYTHONBUFFERED=1
ENV GRIDDB_NOTIFICATION_PROVIDER="[notification_provider]"
ENV GRIDDB_CLUSTER_NAME="[clustername]"
ENV GRIDDB_USERNAME="[griddb-user]"
ENV GRIDDB_PASSWORD="[password]"
ENV GRIDDB_DATABASE="[database]"

WORKDIR /home/site/wwwroot

RUN apt-get update && \
    apt-get install -y default-jdk git maven && \
    rm -rf /var/lib/apt/lists/*

ENV JAVA_HOME=/usr/lib/jvm/default-java

WORKDIR /tmp
RUN git clone https://github.com/griddb/python_client.git && \
    cd python_client/java && \
    mvn install

RUN mkdir -p /home/site/wwwroot/lib && \
    mv /tmp/python_client/java/target/gridstore-arrow-5.8.0.jar /home/site/wwwroot/lib/gridstore-arrow.jar

WORKDIR /tmp/python_client/python
RUN python3.12 -m pip install .

WORKDIR /home/site/wwwroot
COPY ./lib/gridstore.jar /home/site/wwwroot/lib/
COPY ./lib/arrow-memory-netty.jar /home/site/wwwroot/lib/
COPY ./lib/gridstore-jdbc.jar /home/site/wwwroot/lib/
COPY *.py .
COPY requirements.txt .
RUN python3.12 -m pip install -r requirements.txt

ENV CLASSPATH=/home/site/wwwroot/lib/gridstore.jar:/home/site/wwwroot/lib/gridstore-jdbc.jar:/home/site/wwwroot/lib/gridstore-arrow.jar:/home/site/wwwroot/lib/arrow-memory-netty.jar
```

Once in place, you do the normal docker build, docker tag, docker push. But there is one caveat!

Azure Container Registry

Though not strictly necessary, setting up your own Azure Container Registry (ACR) (think Docker Hub) to host your images makes deploying your code to Azure Functions a whole lot simpler. So in my case, I set up an ACR and pushed my built images into that repository. Once there, I went to the deployment center of my new Python Azure Function and selected my container's name, etc. From there, it will deploy and run based on your stipulations. Cool!
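One practical gotcha with this image: griddb_python only imports cleanly when the JVM can see all four jars on CLASSPATH, and a missing entry tends to surface as an opaque startup error. A small pre-flight check (our own helper, not part of the GridDB client) is cheap insurance:

```python
import os

# The four jars the Dockerfile places under /home/site/wwwroot/lib
REQUIRED_JARS = ["gridstore.jar", "gridstore-jdbc.jar",
                 "gridstore-arrow.jar", "arrow-memory-netty.jar"]

def missing_jars(classpath: str) -> list:
    """Return the required GridDB jars absent from a colon-separated CLASSPATH."""
    present = {os.path.basename(entry) for entry in classpath.split(":") if entry}
    return [jar for jar in REQUIRED_JARS if jar not in present]
```

Calling missing_jars(os.environ.get("CLASSPATH", "")) at function startup and logging the result, before importing griddb_python, turns a cryptic JVM failure into an actionable message.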
Python Code to do Data Aggregation

Similar to the Java implementation above, we will use the Azure Function bindings/triggers in the Python code, with a cron-style expression for the timer. Under the hood, the Azure Functions infrastructure will run the function every hour based on our setting. The code itself is also vanilla: we query the past hour of data from the table we wrote to above, find the averages, and then write that data back to GridDB Cloud in another table. Note that since this function relies solely on the Azure Function timer and GridDB, there is no IoT Hub connection string to grab. Here is the main Python that Azure will run when the time is right:

```python
import logging
import azure.functions as func
import griddb_python as griddb
from griddb_connector import GridDB
from griddb_sql import GridDBJdbc
from datetime import datetime
import pyarrow as pa
import pandas as pd
import sys

app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 * * * *", arg_name="myTimer", run_on_startup=True, use_monitor=False)
def aggregations(myTimer: func.TimerRequest) -> None:
    if myTimer.past_due:
        logging.info('The timer is past due!')
    logging.info('Python timer trigger function executed.')

    nosql = None
    store = None
    ra = None
    griddb_jdbc = None
    try:
        print("Attempting to connect to GridDB...")
        nosql = GridDB()
        store = nosql.get_store()
        ra = griddb.RootAllocator(sys.maxsize)
        if not store:
            print("Connection failed. Exiting script.")
            sys.exit(1)
        griddb_jdbc = GridDBJdbc()
        if griddb_jdbc.conn:
            averages = griddb_jdbc.calculate_avg()
            nosql.pushAvg(averages)
        print("\nScript finished successfully.")
    except Exception as e:
        print(f"A critical error occurred in main: {e}")
    finally:
        print("Script execution complete.")
```

The rest of the code isn't very interesting, but let's take a brief look.
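Stripped of the GridDB plumbing, the heart of this function is a column-wise mean over the rows from the past hour. A minimal, self-contained illustration of just that step (the sample readings below are invented for demonstration):

```python
import statistics

def column_averages(column_names, rows):
    """Group row values by column and compute the mean of each column."""
    results = {name: [] for name in column_names}
    for row in rows:
        for i, name in enumerate(column_names):
            results[name].append(row[i])
    return {name: statistics.mean(values) for name, values in results.items()}

# Hypothetical sensor readings from the past hour
columns = ["temperature", "pressure", "humidity"]
rows = [(21.0, 1012.0, 40.0), (23.0, 1014.0, 44.0)]
print(column_averages(columns, rows))
```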
Here we are querying the last hour of data and calculating the averages:

```python
def calculate_avg(self):
    try:
        curs = self.conn.cursor()
        queryStr = 'SELECT temperature, pressure, humidity FROM telemetryData WHERE ts BETWEEN TIMESTAMP_ADD(HOUR, NOW(), -1) AND NOW();'
        curs.execute(queryStr)
        if curs.description is None:
            print("Query returned no results or failed.")
            return None
        column_names = [desc[0] for desc in curs.description]
        all_rows = curs.fetchall()
        if not all_rows:
            print("No data found for the query range.")
            return None
        results = {name.lower(): [] for name in column_names}
        for row in all_rows:
            for i, name in enumerate(column_names):
                results[name.lower()].append(row[i])
        averages = {
            'temperature': statistics.mean(results['temperature']),
            'humidity': statistics.mean(results['humidity']),
            'pressure': statistics.mean(results['pressure'])
        }
        return averages
    except Exception as e:
        print(f"Error while calculating averages: {e}")
        return None
```

Note: for this function, we created the table beforehand (not in the Python code).

Bonus: Azure Kafka Event Hub

We also set up an Event Hub function that queries the last 5 minutes of telemetry data and streams it through Kafka. We ended up leaving this dangling, but I've included it here because the source code already exists. It also uses a timer trigger and relies solely on the connection to GridDB Cloud. Azure's Event Hub handles all of the complicated Kafka machinery under the hood; we just needed to return the data to be pushed through Kafka. Here is the source code:

```java
package net.griddb;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.EventHubOutput;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;

public class GridDBPublisher {

    @FunctionName("GridDBPublisher")
    @EventHubOutput(name = "outputEvent", eventHubName = "griddb-telemetry",
```

GridDB Cloud version 3.1 is now out, and there are two main new features we would like to showcase: the ability to connect to your Cloud instance via an Azure Virtual Network (VNet) peering connection, and a new way to authenticate your Web API requests with web tokens (bearer tokens). In this article, we will first go through setting up a VNet peering connection to your GridDB Cloud Pay-as-you-go plan. Then we will briefly cover the sample code attached to this article. Lastly, we will go over the new authentication method for the Web API and an example of implementing it in your app/workflow.

VNet Peering

What exactly is a VNet peering connection? Here is the strict definition from the Azure docs:

"Azure Virtual Network peering enables you to seamlessly connect two or more virtual networks in Azure, making them appear as one for connectivity purposes. This powerful feature allows you to create secure, high-performance connections between virtual networks while keeping all traffic on Microsoft's private backbone infrastructure, eliminating the need for public internet routing."

In our case, we want to forge a secure connection between our virtual machine and the GridDB Cloud instance. With this in place, we can connect to GridDB Cloud from ordinary Java/Python source code and use it outside the context of the Web API.

How to Set Up a Virtual Network Peering Connection

Before you start on the GridDB Cloud side, you will first need to create some resources on the Microsoft Azure side. On Azure, create a Virtual Network (VNet).

[Image: Azure resources to create a Virtual Network peering connection]

You will also eventually need a virtual machine on the same virtual network that you just created (the one you will link with GridDB Cloud below). With these two resources created, let's move on to the GridDB Cloud side.
GridDB Cloud: Forging the Virtual Network Peering Connection

For this part of the process, you can read step-by-step instructions in the official docs: https://www.toshiba-sol.co.jp/pro/griddbcloud/docs-en/v3_1/cloud_quickstart_guide_html/GridDB_Cloud_QuickStartGuide.html#connection-settings-for-vnet. From the network tab, click 'Create peering connection' and you'll see a dialog pop up. Here's a summary of the steps needed to forge the VNet peering connection:

1. From the navigation menu, select Network Access and then click the CREATE PEERING CONNECTION button.
2. When the cloud provider selection dialog appears, leave the settings as-is and click NEXT.
3. On the VNet Settings screen, enter your VNet information (Subscription ID, Tenant ID, Resource group name, VNet name) and click NEXT.
4. Run the command provided in the dialog to establish the VNet peering. You can run this in either Azure Cloud Shell (recommended) or the Azure command-line interface (Azure CLI).
5. After running the command, go to the VNet peering list screen and verify that the Status of the connection is Connected.

All of these steps are also plainly laid out in the GridDB Cloud UI; it should be a fairly simple process through and through.

Using the Virtual Network Peering Connection

Once your connection is forged, you should be able to connect to the GridDB Cloud instance using the notification provider address shown in your GridDB Cloud UI dashboard. If you want to do a very quick check before trying to run sample code, you can download the JSON file and use a third-party network tool like telnet to see if the machine is reachable.
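The same reachability check can also be scripted if telnet isn't handy; here is a hypothetical Python equivalent (the `is_reachable` helper is ours, not part of the repo):

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection; True means the port accepted the handshake."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. is_reachable("172.26.30.69", 10001) against the transaction endpoint
```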
For example, download the JSON file, find the transaction address and port number, and then run telnet:

```shell
$ wget [notification-provider-url]
$ cat mfcloud2102.json
{
    "cluster": {"address": "172.26.30.69", "port": 10010},
    "sync": {"address": "172.26.30.69", "port": 10020},
    "system": {"address": "172.26.30.69", "port": 10040},
    "transaction": {"address": "172.26.30.69", "port": 10001},
    "sql": {"address": "172.26.30.69", "port": 20001}
}
$ telnet 172.26.30.69 10001
```

If a connection is made, everything is working properly.

Sample Code

And now you can run the sample code included in this repo. There is Java code for both the NoSQL and SQL interfaces, as well as Python code. To run the Java code, you need Java and Maven installed. The sample code will create tables and read them back, but it also expects you to ingest a CSV dataset from Kaggle: https://www.kaggle.com/code/chaozhuang/iot-telemetry-sensor-data-analysis. We have included the CSV file with this repo.

Ingesting IoT Telemetry Data

To ingest the dataset, first navigate to the griddbCloudDataImport dir inside of this repository. Within this dir, you will find that the GridDB Cloud Import tool is already installed. To use it, open up the .sh file and edit lines 34-36 to include your credentials. Next, you must run the Python file within this directory to clean up the data: it changes the timestamp column to better adhere to the Web API's standard and separates the CSV into 3 distinct files, one for each device found in the dataset. Here is the line of code that transforms the ts column into something the GridDB Web API likes:

```python
df['ts'] = pd.to_datetime(df['ts'], unit='s').dt.strftime('%Y-%m-%dT%H:%M:%S.%f').str[:-3] + 'Z'
```

You can see from the format string that the Web API needs timestamps like "2020-07-12T00:01:34.385Z", whereas the data supplied directly in the CSV before transformation looks like this: "1.5945120943859746E9".
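As a sanity check, the same conversion can be reproduced with only the standard library (the function name here is ours, for illustration):

```python
from datetime import datetime, timezone

def epoch_to_webapi_ts(epoch_seconds: float) -> str:
    """Convert raw epoch seconds to the Web API's millisecond ISO-8601 format."""
    dt = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    # %f yields microseconds (6 digits); trim to milliseconds and append Z
    return dt.strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z"

print(epoch_to_webapi_ts(1.5945120943859746e9))  # 2020-07-12T00:01:34.385Z
```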
Next, you must create the container within your GridDB Cloud. For this part, you can use the GridDB Cloud CLI tool or simply use the GridDB Cloud UI. The schema we want to ingest is found in the schema.json file in the directory:

```json
{
    "container": "device1",
    "containerType": "TIME_SERIES",
    "columnSet": [
        { "columnName": "ts", "type": "timestamp", "notNull": true },
        { "columnName": "co", "type": "double", "notNull": false },
        { "columnName": "humidity", "type": "double", "notNull": false },
        { "columnName": "light", "type": "bool", "notNull": false },
        { "columnName": "lpg", "type": "double", "notNull": false },
        { "columnName": "motion", "type": "bool", "notNull": false },
        { "columnName": "smoke", "type": "double", "notNull": false },
        { "columnName": "temp", "type": "double", "notNull": false }
    ],
    "rowKeySet": ["ts"]
}
```

To create the table with the tool, simply run:

```shell
$ griddb-cloud-cli create schema.json
```

If using the UI, simply follow the UI prompts and enter the schema as shown here. Name the container device1. Once the container is ready to go, you can run the import tool:

```shell
$ ./griddbCloudDataImport.sh device1 device1.csv
```

You can also use the GridDB Cloud CLI Tool to import:

```shell
$ griddb-cloud-cli ingest device1.csv
```

Running The Sample Code (Java)

Now that we have device1 defined and populated with data, let's try running our sample code.
```shell
$ cd sample-code/java
$ export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-23.jdk/Contents/Home
$ mvn clean package
```

And once the code is compiled, set your database parameters as environment variables, and then run the code:

```shell
export GRIDDB_NOTIFICATION_PROVIDER="[notification-provider-address]"
export GRIDDB_CLUSTER_NAME="[clustername]"
export GRIDDB_USERNAME="[username]"
export GRIDDB_PASSWORD="[password]"
export GRIDDB_DATABASE="[database name]"
$ java -jar target/java-samples-1.0-SNAPSHOT-jar-with-dependencies.jar
```

This will run all of the Java code, which includes connections to both the JDBC and NoSQL interfaces. Here's a quick breakdown of the Java sample files included with this repo:

- App.java simply runs the main function and all of the methods within the individual classes.
- Device.java is the class schema for the dataset we ingested.
- GridDB.java covers the NoSQL interface: connecting, creating tables, and querying the dataset ingested above. It also shows multi-put, multi-get, and various aggregation and time-sampling examples.
- GridDBJdbc.java shows connecting to GridDB via the JDBC interface, as well as creating a table and querying it, including a GROUP BY RANGE SQL query.

Running The Sample Code (Python)

For the Python code, you will need to first install the Python client. There are good instructions on the docs page: https://docs.griddb.net/gettingstarted/python.html. The following instructions are for Debian-based systems:

```shell
sudo apt install default-jdk
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

git clone https://github.com/griddb/python_client.git
cd python_client/java
mvn install
cd ../python
python3.12 -m pip install .
cd ..
```

And once installed, you will need to have the .jar files in your $CLASSPATH.
To make it easy, we have included the jars inside the lib dir within the python dir, along with a script that adds them to your classpath:

```shell
$ source ./set-env.sh
$ echo $CLASSPATH
```

Once that's done, set your credentials up as environment variables, similar to the Java section above. From there, you can run the code:

```shell
$ python3.12 main.py
```

Web API (Basic Authentication vs. Bearer Tokens)

The GridDB Web API is one of the methods of orchestrating CRUD operations against your GridDB Cloud instance. Prior to this release, the sole method of authenticating HTTP requests to your GridDB Cloud was Basic Authentication: attaching your username and password to each web request, paired with an IP-filtering firewall. Though this method was enough to keep things secure up until now, the GridDB team's addition of web tokens greatly bolsters the authentication story for GridDB Cloud.

The Dangers of Basic Authentication

Before getting into how to work the bearer token into your workflow, I will showcase a simple example of why Basic Authentication can be problematic. Sending your user/pass credentials in every request leaves you extremely vulnerable to man-in-the-middle attacks. On top of that, servers sometimes keep logs of all incoming headers, meaning your user:pass combo could potentially be stored in plaintext somewhere on some server. And if you reuse passwords at all, your entire online presence could be compromised. I asked an LLM to put together a quick demo of a server reading your user and password combo. The LLM produced a server and a client; the client sends its user:pass as part of its headers, and the server is able to intercept, decode, and store the user:pass in plaintext!
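That "interception" requires no cracking at all: the credentials in a Basic header are merely base64-encoded, so a single decode recovers them. A quick sketch using the same header value the demo produces:

```python
import base64

def decode_basic_auth(header_value: str) -> str:
    """Strip the 'Basic ' prefix and base64-decode the credentials."""
    encoded = header_value.removeprefix("Basic ")
    return base64.b64decode(encoded).decode("utf-8")

header = "Basic TXlVc2VybmFtZTpNeVN1cGVyU2VjcmV0UGFzc3dvcmQxMjM="
print(decode_basic_auth(header))  # MyUsername:MySuperSecretPassword123
```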
After that, we send another request with a bearer token. Though the server can still read and intercept it, bearer tokens naturally expire and won't expose a password that may be reused in other apps. Here is the output of the server:

```shell
go run server.go
Smarter attacker server listening on http://localhost:8080

--- NEW REQUEST RECEIVED ---
Authorization Header: Basic TXlVc2VybmFtZTpNeVN1cGVyU2VjcmV0UGFzc3dvcmQxMjM=
!!! BASIC AUTH INTERCEPTED !!!
!!! DECODED: MyUsername:MySuperSecretPassword123

--- NEW REQUEST RECEIVED ---
Authorization Header: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJpc3JhZWwiLCJleHAiOjE3NjE3NjA3NjZ9.fake_signature_part
```

I think this example makes it clear: sending your password in every request can cause major leakage of your secret data. This is bad!

Bearer Tokens

As shown in the example above, a bearer token is attached to the request's header similarly to the Basic Auth method, but it is labeled as such (Bearer), and the token itself is a string that must be decoded and validated by the server. And though, as explained above, you face a similar threat of somebody stealing your bearer token and impersonating you, the damage is mitigated: the bearer token expires within an hour and doesn't potentially leak information outside the scope of this database; not to mention, a bearer token can carry granular scopes to limit the damage even further. You can read more about them in our earlier articles on saving these into GridDB:

Protect your GridDB REST API with JSON Web Tokens
Protect your GridDB REST API with JSON Web Tokens Part II

How to Use with GridDB Cloud

This assumes you are familiar with GridDB Cloud in general and how to use the Web API. If you are not, please read the quickstart: GridDB Cloud Quickstart. To be issued an auth token from GridDB Cloud, you must use the Web API endpoint: https://[cloud-id]griddb.com/griddb/v2/[cluster-name]/authenticate.
You make a POST request to that address with your Web API credentials (username, password) in the body. If successful, you will receive a JSON response containing the access token string (the string you attach to your requests) and that token's expiry date. From that point on, you can use the bearer token in all of your requests until it expires. Here's a curl example:

```shell
curl --location 'https://[cloud-id]griddb.com/griddb/v2/[cluster-name]/authenticate' \
  --header 'Content-Type: application/json' \
  --data '{ "username": "israel", "password": "israel" }'
```

RESPONSE:

```json
{
    "accessToken": "eyJ0eXAiOiJBY2Nlc3MiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJTMDE2anA3WkppLWlzcmFlbCIsImV4cCI6MTc2MTc2MDk4MCwicHciOiJ1ZXNDZlBRaCtFZXdhYjhWeC95SXBnPT0ifQ.1WynKNIwRLM7pOVhAi9itQh35gUnxlzyi85Vhw3xM8E",
    "expiredDate": "2025-10-29 18:03:00.653 +0000"
}
```

How to Use the Bearer Token in an Application

Now that you know how to get a bearer token, let's take a look at a hands-on example of incorporating it into your application. In this case, we updated the GridDB Cloud CLI Tool to use bearer tokens instead of Basic Authentication. The workflow is as follows: the CLI tool sends each request with the bearer token, but before it does, it first checks whether a valid token exists. If it does, no problem: the request is sent with the proper header attached. If the token doesn't exist or is expired, the tool first uses the user credentials (saved in a config file) to grab a new token and saves the contents into a JSON file; from then on, the token is read and attached to each request until it expires again. First, we need to create a new type called TokenManager which will handle our tokens, including methods for saving and loading the access string.
```go
type TokenManager struct {
	AccessToken string        `json:"accessToken"`
	Expiration  ExpireTime    `json:"expiredDate"`
	mux         sync.RWMutex  `json:"-"`
	buffer      time.Duration `json:"-"`
}

// Here create a new instance of our Token Manager
func NewTokenManager() *TokenManager {
	m := &TokenManager{
		buffer: 5 * time.Minute,
	}
	if err := m.loadToken(); err != nil {
		fmt.Println("No cached token found, will fetch a new one.")
	}
	return m
}
```

This struct will save our access token string into the user's config dir in their filesystem and then load it before every request:

```go
func (m *TokenManager) saveToken() error {
	configDir, err := os.UserConfigDir()
	if err != nil {
		return err
	}
	cliConfigDir := filepath.Join(configDir, "griddb-cloud-cli")
	tokenPath := filepath.Join(cliConfigDir, "token.json")
	data, err := json.Marshal(m)
	if err != nil {
		return err
	}
	return os.WriteFile(tokenPath, data, 0600)
}

func (m *TokenManager) loadToken() error {
	configDir, err := os.UserConfigDir()
	if err != nil {
		return err
	}
	tokenPath := filepath.Join(configDir, "griddb-cloud-cli", "token.json")
	data, err := os.ReadFile(tokenPath)
	if err != nil {
		return err
	}
	return json.Unmarshal(data, &m)
}
```

And here is the code for checking whether the bearer token is expired:

```go
func (m *TokenManager) getAndAddValidToken(req *http.Request) error {
	m.mux.RLock()
	needsRefresh := time.Now().UTC().After(m.Expiration.Add(-m.buffer))
	m.mux.RUnlock()

	if needsRefresh {
		m.mux.Lock()
		if time.Now().UTC().After(m.Expiration.Add(-m.buffer)) {
			if err := m.getBearerToken(); err != nil {
				m.mux.Unlock()
				return err
			}
		}
		m.mux.Unlock()
	}

	m.mux.RLock()
	defer m.mux.RUnlock()
	req.Header.Add("Authorization", "Bearer "+m.AccessToken)
	req.Header.Add("Content-Type", "application/json")
	return nil
}
```

And finally, here's the code for actually grabbing a new bearer token and attaching it to each HTTP request sent by the tool:

```go
func (m *TokenManager) getBearerToken() error {
	fmt.Println("--- REFRESHING TOKEN ---")
	if !(viper.IsSet("cloud_url")) {
		log.Fatal("Please provide a `cloud_url` in your config file! You can copy this directly from your Cloud dashboard")
	}
	configCloudURL := viper.GetString("cloud_url")
	parsedURL, err := url.Parse(configCloudURL)
	if err != nil {
		fmt.Println("Error parsing URL:", err)
		return err
	}
	newPath := path.Dir(parsedURL.Path)
	parsedURL.Path = newPath
	authEndpoint, _ := parsedURL.Parse("./authenticate")
	authURL := authEndpoint.String()
	method := "POST"

	user := viper.GetString("cloud_username")
	pass := viper.GetString("cloud_pass")
	payloadStr := fmt.Sprintf(`{"username": "%s", "password": "%s" }`, user, pass)
	payload := strings.NewReader(payloadStr)

	client := &http.Client{}
	req, err := http.NewRequest(method, authURL, payload)
	if err != nil {
		log.Fatal(err)
	}
	defer req.Body.Close()
	req.Header.Add("Content-Type", "application/json")

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("error with client DO: ", err)
	}
	CheckForErrors(resp)
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if err := json.Unmarshal(body, &m); err != nil {
		log.Fatalf("Error unmarshaling access token %s", err)
	}
	if err := m.saveToken(); err != nil {
		fmt.Println("Warning: Could not save token to cache:", err)
	}
	return nil
}

func MakeNewRequest(method, endpoint string, body io.Reader) (req *http.Request, e error) {
	if !(viper.IsSet("cloud_url")) {
		log.Fatal("Please provide a `cloud_url` in your config file! You can copy this directly from your Cloud dashboard")
	}
	url := viper.GetString("cloud_url")

	req, err := http.NewRequest(method, url+endpoint, body)
	if err != nil {
		fmt.Println("error with request:", err)
		return req, err
	}
	tokenManager.getAndAddValidToken(req)
	return req, nil
}
```

As usual, the source code can be found on the GitHub page for the tool.
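To make the check-then-refresh pattern above easy to follow outside of Go, here is a hypothetical Python sketch of the same logic (the `fetch_token` callable stands in for the POST to /authenticate; none of these names come from the CLI tool itself):

```python
from datetime import datetime, timedelta, timezone

class BearerTokenCache:
    """Caches a bearer token and refreshes it shortly before it expires."""

    def __init__(self, fetch_token, buffer=timedelta(minutes=5)):
        # fetch_token() must return (access_token, expiry_datetime)
        self._fetch_token = fetch_token
        self._buffer = buffer
        self._token = None
        self._expiry = None

    def auth_header(self, now=None):
        """Return the Authorization header, refreshing the token if needed."""
        now = now or datetime.now(timezone.utc)
        if self._token is None or now >= self._expiry - self._buffer:
            self._token, self._expiry = self._fetch_token()
        return {"Authorization": f"Bearer {self._token}"}

# Demo with a stubbed-out fetcher instead of a real HTTP call
calls = []
def fake_fetch():
    calls.append(1)
    return ("demo-token", datetime.now(timezone.utc) + timedelta(hours=1))

cache = BearerTokenCache(fake_fetch)
cache.auth_header()
cache.auth_header()
print(len(calls))  # only fetched once; the second call reused the cached token
```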
