

Building a Smarter Book Inventory using Vaadin and Spring AI

Here is a simple yet powerful project: a smarter web application for managing book collections. In this post, we're going to build this web application from the ground up using Spring Boot, Spring AI, and Vaadin Flow, and we will store the data in the cloud using GridDB Cloud.

Core Features

So, what exactly are we building? Let's break it down.

Importing Books from CSV

A common and easy way to load an existing list of books is from a CSV file; think of a simple spreadsheet with columns like `Title` and `Author`. We will build a small part of the application to read this CSV file. This parser will read lines, split them by commas, and create a `Book` object with `title`, `authors`, `publisher`, and `rating`. This will serve as our initial dataset in the database, the first step before adding more exciting features.

AI Enrichment

Once we have the initial dataset, we'll take each book record and look up its genre and a brief summary. We'll use Spring AI to send a request to the OpenAI API, then read the response, extract the genre and summary, and update the book data.

Based on the features above, we need the following components:

- A web interface for listing books, uploading the CSV, and a book detail page where the user asks the AI to update a book's genre and summary by clicking a button. We will develop this UI using Vaadin Flow.
- A CSV parser component.
- An AI enrichment service that uses Spring AI to interact with an LLM.
- A NoSQL database service to interact with the GridDB Cloud API. We will use Spring's `RestClient` to handle the request and response; it also converts Java records into the HTTP body according to the API specification.

Tech Stack

For this project, I selected:

Spring Boot

Spring Boot streamlines development with auto-configuration, an embedded server, and starter dependencies. This effortless setup lets us build and launch applications swiftly.
Vaadin Flow

Vaadin Flow is a versatile Java UI framework that lets us construct web applications purely in Java, minimizing the hassle of juggling a separate frontend. Packed with pre-built components, Vaadin is tailor-made for data-rich business applications, ensuring users enjoy a seamless experience.

Spring AI

Spring AI is a powerful extension of the Spring Framework. It empowers Java developers to craft AI-driven applications with minimal reskilling required. By tapping into familiar Spring conventions, Spring AI opens the door to advanced AI features, simplifying the journey to creating intelligent apps.

GridDB Cloud

GridDB Cloud is a fully managed, cloud-based database offered by GridDB, designed to store and process massive volumes of time-series data in real time. Read the quick start guide to learn how to use GridDB Cloud. You can sign up for a GridDB Cloud Free instance at this link: https://form.ict-toshiba.jp/downloadformgriddbcloudfreeplan_e.

Create a new Vaadin Project

To create a Vaadin Flow project, go to start.vaadin.com. The starter project is a basic application with a fully functional end-to-end workflow. Choose the pure Java option with Vaadin Flow, click the download button, then unzip and open the project in your favorite IDE.
You should now see a typical Maven project as shown below:

```
├── .mvn
├── mvnw
├── mvnw.cmd
├── pom.xml
├── README.md
└── src
    ├── main
    │   ├── frontend
    │   ├── java
    │   └── resources
    └── test
        ├── java
        └── resources
```

Data Access

Next, we need a domain object to hold the book data as follows:

```java
public record Book(String id, String title, String authors, String publisher, Double rating,
        String genres, String summary, Long goodreadsBookId, String goodreadsUrl) {

    public Book(String id, String title, String authors, String publisher, Double rating,
            String genres, String summary, Long goodreadsBookId) {
        this(id, title, authors, publisher, rating, genres, summary, goodreadsBookId, null);
    }
}
```

Next, we create a `BookService.java` class to fetch and store data, and to pass data to the presentation layer.

```java
@Service
public class BookService {

    private final BookContainer bookContainer;

    public BookService(BookContainer bookContainer) {
        this.bookContainer = bookContainer;
    }

    public List<Book> listBooks() {
        return this.bookContainer.getBooks();
    }

    public Book getBook(String id) {
        Book book = this.bookContainer.getBook(id);
        if (book == null) {
            throw new IllegalArgumentException("Book with ID " + id + " does not exist.");
        }
        return book;
    }

    public void createTableBooks() {
        this.bookContainer.createTableBooks();
    }

    public void saveBooks(List<Book> books) {
        if (books == null || books.isEmpty()) {
            return;
        }
        List<Book> newBooks = books.stream().map(book -> {
            String id = (book.id() != null) ? book.id() : nextId();
            return new Book(id, book.title(), book.authors(), book.publisher(), book.rating(),
                    book.genres(), book.summary(), book.goodreadsBookId(), book.goodreadsUrl());
        }).collect(Collectors.toList());
        this.bookContainer.saveBooks(newBooks);
    }

    public static String nextId() {
        return "book_" + TsidCreator.getTsid().format("%S");
    }
}
```

The `saveBooks()` method accepts a list of books; both creating a new book and updating an existing one go through this method.
We generate the book's ID using Time-Sorted Unique Identifiers (TSID). Next, we create `BookContainer.java` to interact with the database API:

```java
@Service
public class BookContainer {

    private final GridDbCloudClient gridDbCloudClient;
    private static final String BOOKSTBLNAME = "Books";

    public BookContainer(GridDbCloudClient gridDbCloudClient) {
        this.gridDbCloudClient = gridDbCloudClient;
    }

    public void createTableBooks() {
        List<GridDbColumn> columns = List.of(
                new GridDbColumn("id", "STRING", Set.of("TREE")),
                new GridDbColumn("title", "STRING"),
                new GridDbColumn("authors", "STRING"),
                new GridDbColumn("publisher", "STRING"),
                new GridDbColumn("rating", "DOUBLE"),
                new GridDbColumn("genres", "STRING"),
                new GridDbColumn("summary", "STRING"),
                new GridDbColumn("goodreadsBookId", "LONG"),
                new GridDbColumn("goodreadsUrl", "STRING"));
        GridDbContainerDefinition containerDefinition =
                GridDbContainerDefinition.createContainer(BOOKSTBLNAME, columns);
        this.gridDbCloudClient.createContainer(containerDefinition);
    }
}
```

- `GridDbCloudClient`: communicates with the GridDB Cloud database. Provided automatically by Spring Boot through constructor-based dependency injection.
- `BOOKSTBLNAME`: the table name as a constant.
- `createTableBooks()`: creates a `Books` table (a `collection`) in the GridDB Cloud database to hold records for our books. The method starts by creating a `List` of `GridDbColumn` objects, each describing one column and its data type, then builds the container definition and uses the `GridDbCloudClient` to actually create the table.
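The `GridDbColumn` and `GridDbContainerDefinition` types used above are not shown in this post. As a rough sketch only, here is one way they might look as Java records, assuming the GridDB Cloud Web API's container-creation payload (container name, container type, row key flag, and a column list); the exact field names and JSON mapping are assumptions, not the article's actual code:

```java
import java.util.List;
import java.util.Set;

// Hypothetical record matching the usage in BookContainer.
record GridDbColumn(String name, String type, Set<String> index) {
    // Convenience constructor for columns without an index.
    GridDbColumn(String name, String type) {
        this(name, type, Set.of());
    }
}

// Hypothetical container definition; "COLLECTION" is GridDB's generic
// (non-time-series) container type.
record GridDbContainerDefinition(String containerName, String containerType,
        boolean rowkey, List<GridDbColumn> columns) {

    static GridDbContainerDefinition createContainer(String name, List<GridDbColumn> columns) {
        return new GridDbContainerDefinition(name, "COLLECTION", true, columns);
    }
}
```

In the real project these records would carry Jackson annotations (or rely on a naming strategy) so `RestClient` serializes them to the snake_case field names the Web API expects.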
```java
public class GridDbCloudClient {

    private final RestClient restClient;

    public GridDbCloudClient(String baseUrl, String authToken) {
        this.restClient = RestClient.builder()
                .baseUrl(baseUrl)
                .defaultHeader("Authorization", "Basic " + authToken)
                .defaultHeader("Content-Type", "application/json")
                .defaultHeader("Accept", "application/json")
                .build();
    }

    public void createContainer(GridDbContainerDefinition containerDefinition) {
        try {
            restClient.post().uri("/containers").body(containerDefinition)
                    .retrieve().toBodilessEntity();
        } catch (Exception e) {
            throw new GridDbException("Failed to create container",
                    HttpStatusCode.valueOf(500), e.getMessage(), e);
        }
    }
}
```

The `GridDbCloudClient` class is a Java client for interacting with the GridDB Cloud Web API. It provides methods for creating containers, registering rows, acquiring rows, and executing POST requests. We set up `RestClient` with a base URL and a Basic authorization header so that it can connect to the GridDB Cloud Web API and authenticate.

```java
// BookContainer.java
public void saveBooks(List<Book> books) {
    StringBuilder sb = new StringBuilder();
    sb.append("[");
    for (int i = 0; i < books.size(); i++) {
        Book book = books.get(i);
        sb.append("[");
        sb.append("\"").append(book.id()).append("\"");
        sb.append(", ");
        sb.append("\"").append(book.title()).append("\"");
        sb.append(", ");
        sb.append("\"").append(book.authors()).append("\"");
        sb.append(", ");
        sb.append("\"").append(book.publisher()).append("\"");
        sb.append(", ");
        sb.append(book.rating());
        sb.append(", ");
        sb.append("\"").append(book.genres() != null ? book.genres() : "").append("\"");
        sb.append(", ");
        sb.append("\"").append(book.summary() != null ? book.summary() : "").append("\"");
        sb.append(", ");
        sb.append(book.goodreadsBookId());
        sb.append(", ");
        sb.append("\"").append(book.goodreadsUrl() != null ? book.goodreadsUrl() : "").append("\"");
        sb.append("]");
        if (i < books.size() - 1) {
            sb.append(", ");
        }
    }
    sb.append("]");
    String result = sb.toString();
    this.gridDbCloudClient.registerRows(BOOKSTBLNAME, result);
}
```

`saveBooks(List<Book> books)` receives a list of `Book` objects and saves them into the `Books` collection created earlier. It first formats the list into a specialized string representation of an array of arrays, then uses the GridDB client to send this formatted string and save all the book records in the database.

An illustration of the request body used to register rows into a container:

```json
[
  ["abf8e412", "The Ultimate Hitchhiker's Guide to the Galaxy", "Douglas Adams", "Del Rey Books", 4.37, "", "", 13],
  ["5f8bdef1", "The Lost Continent: Travels in Small Town America", "Bill Bryson", "William Morrow Paperbacks", 3.83, "", "", 26]
]
```

Getting books from GridDB Cloud:

```java
// BookContainer.java
public List<Book> getBooks() {
    AcquireRowsRequest requestBody = AcquireRowsRequest.builder().limit(50L).build();
    AcquireRowsResponse response = this.gridDbCloudClient.acquireRows(BOOKSTBLNAME, requestBody);
    if (response == null || response.getRows() == null) {
        return List.of();
    }
    List<Book> books = convertResponseToBook(response);
    return books;
}
```

- Build the request body by creating an `AcquireRowsRequest` object, which tells GridDB how many rows we want.
- Use `gridDbCloudClient.acquireRows(BOOKSTBLNAME, requestBody)` to send the request to the API.
- Check whether the response is null or contains no rows.
- If everything is okay, convert the database response into a list of `Book` objects and return it.

Converting raw data to `Book` objects:
```java
// BookContainer.java
private List<Book> convertResponseToBook(AcquireRowsResponse response) {
    List<Book> books = response.getRows().stream().map(row -> {
        try {
            var book = new Book(row.get(0).toString(), row.get(1).toString(), row.get(2).toString(),
                    row.get(3).toString(),
                    Optional.ofNullable(row.get(4)).map(Object::toString).map(Double::valueOf).orElse(null),
                    Optional.ofNullable(row.get(5)).map(Object::toString).orElse(null),
                    Optional.ofNullable(row.get(6)).map(Object::toString).orElse(null),
                    Optional.ofNullable(row.get(7)).map(Object::toString).map(Long::valueOf).orElse(null),
                    Optional.ofNullable(row.get(8)).map(Object::toString).orElse(null));
            return book;
        } catch (Exception e) {
            return null;
        }
    }).filter(book -> book != null).toList();
    return books;
}
```

We use Java Streams to process each row from the database and create a new `Book` object per row. The database returns data as a list where each position represents a different field, so we check for null values and handle errors gracefully.

CSV Parser for Goodreads Books

In this project we are going to use a Goodreads book dataset. Goodreads is the world's largest site for readers and book recommendations. A sample book in CSV:

```
Id,Name,RatingDist1,pagesNumber,RatingDist4,RatingDistTotal,PublishMonth,PublishDay,Publisher,CountsOfReview,PublishYear,Language,Authors,Rating,RatingDist2,RatingDist5,ISBN,RatingDist3
1339,"Loving and Dying: A Reading of Plato's Phaedo, Symposium, and Phaedrus",1:0,288,4:2,total:5,12,11,University Press of America,0,2001,,Richard Gotshalk,4.6,2:0,5:3,0761820728,3:0
```

Why create a custom parser? Because this CSV data contains commas within the actual data.
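To see why a naive `String.split(",")` fails on the sample row above, try it: the quoted title itself contains commas, so a blind split produces too many fields. A small demonstration (the class name is just for illustration):

```java
// Demonstrates why a plain split(",") miscounts fields when a quoted
// value contains commas.
class NaiveSplitDemo {

    // The Goodreads sample row; its quoted title contains two commas.
    static final String LINE =
            "1339,\"Loving and Dying: A Reading of Plato's Phaedo, Symposium, and Phaedrus\","
            + "1:0,288,4:2,total:5,12,11,University Press of America,0,2001,,Richard Gotshalk,"
            + "4.6,2:0,5:3,0761820728,3:0";

    static int naiveFieldCount() {
        // The header defines 18 columns, but split(",") also treats the two
        // commas inside the quoted title as separators, yielding 20 fields.
        return LINE.split(",").length;
    }

    public static void main(String[] args) {
        System.out.println(naiveFieldCount()); // 20, not the expected 18
    }
}
```

The custom parser below solves exactly this by only treating a comma as a separator when it is outside quotes.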
```java
// GoodReadBookCSVParser.java
try (BufferedReader reader = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8))) {
    String line;
    boolean isFirst = true;
    while ((line = reader.readLine()) != null) {
        if (isFirst) {
            isFirst = false; // skip the header row
            continue;
        }
        String[] fields = parseCsvLine(line);
        // assign parser results into each field
        Book book = new Book(null, title, authors, publisher, rating, null, null, goodreadsBookId);
        books.add(book);
    }
}
```

The parser receives an `InputStream` and wraps it in a `BufferedReader`, buffering characters to efficiently read characters, arrays, and lines. We use a try-with-resources block to ensure the file resource gets closed properly, even if something goes wrong, and we specify UTF-8 encoding to handle special characters correctly.

```java
// GoodReadBookCSVParser.java
private String[] parseCsvLine(String line) {
    List<String> result = new ArrayList<>();
    boolean inQuotes = false;
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < line.length(); i++) {
        char c = line.charAt(i);
        if (c == '"') {
            inQuotes = !inQuotes;
        } else if (c == ',' && !inQuotes) {
            result.add(sb.toString());
            sb.setLength(0);
        } else {
            sb.append(c);
        }
    }
    result.add(sb.toString());
    return result.toArray(new String[0]);
}
```

`parseCsvLine()` goes through each character one by one, tracking whether we are currently inside a quoted field. When it encounters a comma, it only treats it as a field separator if we are NOT inside quotes.

Creating Vaadin views and layouts

Because this project was generated using Vaadin Start, we got a fully functional application that can be easily extended and customized, so we will add new views by following the existing structure.

```
com.company
├── base
│   └── ui
│       ├── component
│       └── view
└── bookinventory
    ├── domain
    ├── seeder
    ├── service
    └── ui
        └── view
            ├── BookDetailView.java
            └── BookListView.java
```

Now, let's add the `BookListView` component to display all books.
```java
// BookListView.java
@Route("book-list")
@PageTitle("Book List")
@Menu(order = 0, icon = "vaadin:book", title = "Book List")
public class BookListView extends Main {

    private final Logger log = LoggerFactory.getLogger(getClass());
    private final BookService bookService;
    private final Grid<Book> bookGrid;
}
```

- `@Route("book-list")`: defines a Flow view that can be accessed at `http://localhost:8080/book-list`.
- `@Menu`: makes the Flow view appear in the menu.
- `BookService` is used to get the list of books and to save books uploaded from CSV.
- A Vaadin `Grid` displays the tabular book data.
- A component for uploading a CSV file, as shown in `BookListView.java`.

Next, we add `BookDetailView`. From the book list, clicking one of the book titles navigates to the book detail page.

```java
@Route("book-detail")
@PageTitle("Book Detail")
public class BookDetailView extends VerticalLayout implements HasUrlParameter<String> {

    private final BookService bookService;
    private FormLayout content;
    private String bookId;
    private Button fetchGenreBtn;
    private ProgressBar progressBar;
    private NativeLabel progresLabel;
    private Button fetchSummaryBtn;

    public BookDetailView(BookService bookService) {
        this.bookService = bookService;
        fetchGenreBtn.addClickListener(e -> {
            var ui = UI.getCurrent();
            Book book = getCurrentBook();
            progresLabel.setVisible(true);
            progresLabel.setText("Asking AI for " + book.title() + "…");
            progressBar.setVisible(true);
            progressBar.setIndeterminate(true);
            bookService.asyncGenerateGenre(book.id(),
                    ui.accessLater(this::onJobCompleted, null),
                    ui.accessLater(progressBar::setValue, null),
                    ui.accessLater(this::onJobFailed, null));
        });
    }
}
```

- The view extends `VerticalLayout`, so all components added are laid out vertically.
- It implements `HasUrlParameter<String>`, so the page can receive a book ID from the URL.
- We use `BookService` to fetch the book from the database, call the AI assistant, and update the book data.
- The `content` form layout shows the book details.
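The `asyncGenerateGenre` method called above is not shown in this post. A minimal sketch of how such a method might be shaped, assuming a background task that reports completion, progress, and failure through callbacks (which the view wraps with `ui.accessLater(...)` so updates land on the UI thread); the names, the `GenreLookup` interface, and the `CompletableFuture` executor choice are all assumptions, not the article's actual code:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

// Hypothetical sketch of an async enrichment task. GenreLookup stands in for
// the article's BookAssistant; persistence is elided.
class AsyncEnrichmentSketch {

    interface GenreLookup {
        String findGenre(String bookId);
    }

    static CompletableFuture<Void> asyncGenerateGenre(String bookId, GenreLookup assistant,
            Consumer<String> onCompleted, Consumer<Double> onProgress,
            Consumer<Exception> onFailed) {
        return CompletableFuture.runAsync(() -> {
            try {
                onProgress.accept(0.1);                      // task started
                String genre = assistant.findGenre(bookId);  // blocking LLM call
                // ...persist the genre on the Book record here...
                onProgress.accept(1.0);
                onCompleted.accept(genre);                   // notify the view
            } catch (Exception e) {
                onFailed.accept(e);
            }
        });
    }
}
```

The key design point is that the long-running LLM call never runs on the UI thread; the view only touches Vaadin components inside the `accessLater` wrappers.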
When the `fetchGenreBtn` button is clicked, we show a progress bar while the `bookService` starts the AI task.

Spring AI Integration

Our book data is ready in the database and we can access it through the listing and detail pages. Now it is time to enhance the functionality by integrating AI: we're going to dive into how we can use AI to automatically figure out the genre and write a concise summary for each book.

Before we dive into the "how", let's talk about an important part: Large Language Models (LLMs). These are powerful AI systems (like OpenAI's GPT models) that can understand and create human-like text. They let us "ask" for a book's genre or summary.

Now, we need to connect to these AI models. This is where Spring AI comes in as our best friend. Spring AI simplifies how we work with LLMs: we don't have to send raw HTTP requests to OpenAI, because Spring AI handles all that heavy lifting. It provides a consistent way to interact with different LLM providers, so whether we're using OpenAI today or decide to switch to Google Gemini tomorrow, our code for making AI calls remains largely the same.

Include the Spring AI dependency in our `pom.xml`:

```xml
<properties>
    <spring-ai.version>1.0.0-SNAPSHOT</spring-ai.version>
</properties>

<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
    <version>${spring-ai.version}</version>
</dependency>
```

Configure the API key:

```properties
# The OpenAI API key to use
spring.ai.openai.api-key=${OPENAI_API_KEY}
# The default OpenAI model to use
spring.ai.openai.chat.options.model=${OPENAI_MODEL:gpt-4o-mini}
```

Configure the `ChatClient`:

```java
@Configuration
public class ChatClientConfig {

    @Bean
    public ChatClient chatClient(ChatClient.Builder chatClientBuilder) {
        return chatClientBuilder.build();
    }
}
```

Create a `BookAssistant.java` for sending book data to the LLM and returning the response. This class is called by `BookService`.

```java
public BookAIReply findBookGenre(String title, String authors) {
    BookAIReply reply = chatClient.prompt()
            .user(user -> user.text("What is the genre of the book {title} by {authors}. Provide the source url.")
                    .param("title", title).param("authors", authors))
            .call().entity(BookAIReply.class);
    return reply;
}

public BookAIReply findBookSummary(String title, String authors) {
    BookAIReply reply = chatClient.prompt()
            .user(user -> user.text("What is the summary of the book {title} by {authors}. Provide the source url.")
                    .param("title", title).param("authors", authors))
            .call().entity(BookAIReply.class);
    return reply;
}
```

We use a zero-shot prompting technique: a zero-shot prompt directly instructs the model to perform a task without any additional examples to steer it.

- `chatClient.prompt()` starts building a prompt for the AI model.
- `.user(user -> user.text(…).param(…))` defines the user message for the prompt. The user role represents the user's input: their questions, commands, or statements to the AI. `user.text(…)` sets the message template, using placeholders (`{title}`, `{authors}`), while `.param("title", title)` and `.param("authors", authors)` inject the actual values into the template.
- `.call()` executes the prompt, sending it to the AI model.
- `.entity(BookAIReply.class)` maps the AI's response to a `BookAIReply` Java object, making it easy to work with structured data.

This method chain provides a fluent, type-safe way to interact with AI models using Spring AI, abstracting away the complexity of prompt construction and response parsing. The same pattern is used for the other methods.

And finally, wire the `BookAssistant` component into the `BookService`:

```java
BookAIReply reply = bookAssistant.findBookSummary(book.title(), book.authors());
String summary = reply.value();
String sourceUrl = reply.sourceUrl();
```

You can find the code for this article on GitHub and run the application from the command line with Maven.

(Figure: GridDB Cloud query output.)

Summary

In this article, we have built a web application without touching a single line of JavaScript or HTML. We achieved this by combining Vaadin with Spring Boot.
What's more, we made it even smarter by integrating Spring AI, giving it intelligent capabilities.

Future enhancements:

- Add data filtering.
- Use pagination or lazy loading.
- Evaluate the generative AI output.
- Enable natural language queries for book searches with semantic search.
- Add a voice assistant that responds to users.

Hackathon Gallery

Introduction

Over the last few months of 2025, GridDB held a hackathon to highlight the versatility and productiveness of GridDB Cloud. The prompt of the event was simple: use the power of GridDB Cloud to build any sort of app you want; the webpage for the event mentioned IoT, but the prompt was open and users could submit ideas based on any personal interests or expertise. The event (officially titled the GridDB IoT Hackathon) had two distinct phases: an online phase, where teams of 2-5 could submit their ideas with no coding necessary, just a basic blueprint of how they planned to implement their idea; and then an all-out in-person event hosted in Bengaluru, India for the top 5 teams, as decided by the judging panel. As exciting as the in-person event was, we will save that portion for another day; today's article focuses on the online portion of the event.

And now, for some numbers. We had over 250 participants sign up for the event through the online portal, and from there, 28 teams submitted their ideas. Ideas ranged from health, to finance, to on-the-field sensors for a variety of different purposes. All in all, we were very impressed and flattered at the breadth and range of project ideas submitted by the wonderful GridDB community. Due to technical issues, we lost access to the original hackathon portal, but we have re-created the gallery for all to see here: Hackathon Gallery. Please be on the lookout for the next article, where we will showcase the 5 submissions that graduated to the finalist round hosted in Bengaluru.

Introduction

As global populations grow, agriculture faces mounting pressure to produce more food sustainably. Precision agriculture, powered by IoT and real-time data analytics, offers a solution by optimizing crop management through actionable insights. However, traditional relational databases struggle with the velocity and volume of agricultural time-series data, creating performance bottlenecks when farmers need immediate analysis of crop conditions, weather patterns, and soil metrics.

GridDB's specialized time-series architecture addresses these challenges through efficient storage, high-speed ingestion, and optimized query performance for temporal data patterns. Its ability to handle mixed-frequency sensor data, from hourly weather readings to daily satellite imagery, makes it particularly well-suited for agricultural monitoring systems. This article explores a simple Spring Boot application that leverages GridDB Cloud to monitor crop health predictively. Our implementation integrates real-time weather data from NASA's POWER API, stores environmental time-series in GridDB, and exposes analytics through REST APIs for dashboard visualization.

Time-Series Database Requirements for Agricultural IoT

An ideal time-series database platform must efficiently ingest, store, and analyze data from diverse sources to enable data-driven decisions in crop health monitoring, irrigation management, and yield prediction. Meeting these demands involves addressing several critical data characteristics and infrastructure requirements, such as:

- High cardinality: numerous unique time series generated by varied sensor types, farm locations, and devices.
- Multi-frequency data streams: environmental sensors update every 15 minutes, weather APIs typically provide hourly updates, and satellite imagery is collected daily or at intervals of 1 to 7 days.
- Sensor metrics: each sensor point typically captures four distinct metrics (e.g., temperature, humidity, soil moisture, and light intensity).
- Ingestion volume: per farm, 100 sensors × 4 metrics × 96 readings/day = ~38,400 records daily; across 500+ farms, 500 farms × ~38.4K records = ~19 million inserts per day.
- Timestamp precision: data is recorded with microsecond-level precision to support fine-grained temporal analysis.
- Query and storage requirements: optimized for frequent time-based queries (e.g., daily or weekly trends), with efficient data compression and long-term retention strategies to manage continuous high-volume ingestion.

These patterns demand a robust time-series database like GridDB, designed to handle high ingest rates, granular timestamps, and complex queries, all essential for scalable agricultural IoT solutions.

Project Overview: Building the Predictive Crop Health Monitoring System

In this project, we focus on real-time crop health monitoring by processing environmental data and displaying important insights through visual dashboards, built with Spring Boot, GridDB Cloud, Thymeleaf, and Chart.js.

System Workflow

1. Collection: fetches environmental metrics from NASA's POWER API.
2. Storage: stores data in GridDB using a time-series model (the CropHealthData container).
3. Analysis: processes raw data into actionable stress indicators using CropHealthService.
4. Visualization: displays daily, weekly, and monthly trends on a web-based dashboard.
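Since the CropHealthData container mentioned in the workflow is created as a time-series container, it may help to see roughly what its creation request looks like. The sketch below is an assumption: the column names and types are inferred from the row payloads the ingestion code sends later in this article, and the exact JSON field names follow the general shape of the GridDB Cloud Web API container-creation payload rather than the project's actual request:

```json
{
  "container_name": "CropHealthData",
  "container_type": "TIME_SERIES",
  "rowkey": true,
  "columns": [
    { "name": "timestamp", "type": "TIMESTAMP" },
    { "name": "latitude", "type": "DOUBLE" },
    { "name": "longitude", "type": "DOUBLE" },
    { "name": "solarRadiation", "type": "DOUBLE" },
    { "name": "dewPoint", "type": "DOUBLE" },
    { "name": "windSpeed", "type": "DOUBLE" },
    { "name": "surfacePressure", "type": "DOUBLE" },
    { "name": "specificHumidity", "type": "DOUBLE" }
  ]
}
```

In a time-series container the first column must be the TIMESTAMP row key, which is what makes time-range queries over the sensor data efficient.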
Setting Up GridDB Cloud and Spring Boot Integration

Project Structure

Here's a suggested project structure for this application:

```
├── pom.xml
└── src
    └── main
        ├── java
        │   └── mycode
        │       ├── MySpringBootApplication.java
        │       ├── controller
        │       │   └── CropHealthController.java
        │       ├── dto
        │       │   └── CropHealthData.java
        │       └── service
        │           ├── CollectionService.java
        │           ├── CropHealthService.java
        │           └── RestTemplateConfig.java
        └── resources
            ├── application.properties
            └── templates
                └── dashboard.html
```

This structure separates controllers, models, services, and the application entry point into distinct layers, enhancing modularity and maintainability. It can be further customized based on individual requirements.

Set Up GridDB Cloud

For this exercise, we will be using the GridDB Cloud version. Start by visiting the GridDB Cloud portal and signing up for an account. Depending on requirements, either the free plan or a paid plan can be selected for broader access. After registration, an email will be sent containing essential details, including the Web API URL and login credentials. Once the login details are received, log in to the Management GUI to access the cloud instance.

Create Database Credentials

Before interacting with the database, we must create a database user:

1. Navigate to security settings: in the Management GUI, go to the "GridDB Users" tab.
2. Create a database user: click "Create Database User," enter a username and password, and save the credentials. For example, set the username as soccer_admin with a strong password.
3. Store credentials securely: these will be used in your application to authenticate with GridDB Cloud.

Set Allowed IP Addresses

To restrict access to authorized sources, configure the allowed IP settings. Navigate to network settings: in the Management GUI, go to the "Network Access" tab and locate the "Allowed IP" section.
Add IP addresses: for development, you can temporarily add your local machine's IP.

Add POM Dependencies

Here's an example of how to configure the dependencies in the `pom.xml` file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>my-griddb-app</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>my-griddb-app</name>
  <description>GridDB Application with Spring Boot</description>
  <url>http://maven.apache.org</url>

  <parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>3.2.4</version>
    <relativePath /> <!-- lookup parent from repository -->
  </parent>

  <properties>
    <maven.compiler.source>17</maven.compiler.source>
    <maven.compiler.target>17</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <jackson.version>2.16.1</jackson.version>
    <lombok.version>1.18.30</lombok.version>
    <springdoc.version>2.3.0</springdoc.version>
    <jersey.version>3.1.3</jersey.version>
    <httpclient.version>4.5.14</httpclient.version>
  </properties>

  <dependencies>
    <!-- Spring Boot Starters -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
      <exclusions>
        <exclusion>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-thymeleaf</artifactId>
    </dependency>
    <!-- Testing -->
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <scope>test</scope>
    </dependency>
    <!-- API Documentation -->
    <dependency>
      <groupId>org.springdoc</groupId>
      <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
      <version>${springdoc.version}</version>
    </dependency>
    <!-- JSON Processing -->
    <dependency>
      <groupId>org.glassfish.jersey.core</groupId>
      <artifactId>jersey-client</artifactId>
      <version>${jersey.version}</version>
    </dependency>
    <dependency>
      <groupId>org.json</groupId>
      <artifactId>json</artifactId>
      <version>20231013</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
      <version>${jackson.version}</version>
    </dependency>
    <!-- HTTP Client -->
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>${httpclient.version}</version>
    </dependency>
    <!-- Development Tools -->
    <dependency>
      <groupId>org.projectlombok</groupId>
      <artifactId>lombok</artifactId>
      <version>${lombok.version}</version>
      <optional>true</optional>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
          <excludes>
            <exclude>
              <groupId>org.projectlombok</groupId>
              <artifactId>lombok</artifactId>
            </exclude>
          </excludes>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.11.0</version>
        <configuration>
          <source>${maven.compiler.source}</source>
          <target>${maven.compiler.target}</target>
          <annotationProcessorPaths>
            <path>
              <groupId>org.projectlombok</groupId>
              <artifactId>lombok</artifactId>
              <version>${lombok.version}</version>
            </path>
          </annotationProcessorPaths>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
```

The `application.properties` file stores configuration settings like the GridDB Cloud API URL and key, and the NASA POWER API URL, enabling the app to connect securely to these external services:

```properties
griddb.rest.url=https://your-griddb-cloud-url/rest
griddb.api.key=your-griddb-api-key
nasa.power.api.url=https://power.larc.nasa.gov/api/temporal/daily/point
```

Technical Implementation

In the following section, we'll walk through the key steps required to set up the project.

Container Setup in GridDB Cloud

A container named CropHealthData is created in GridDB Cloud, defined as a time-series type, with timestamp set as the row key. Next, we define the schema, which includes the following columns: `timestamp` (TIMESTAMP, row key), `latitude`, `longitude`, `solarRadiation`, `dewPoint`, `windSpeed`, `surfacePressure`, and `specificHumidity` (all DOUBLE).

Data Collection: CollectionService

The CollectionService handles weather data ingestion by acting as the interface between external data sources and the GridDB backend. It integrates with NASA's POWER API to retrieve daily environmental metrics crucial for monitoring crop health.

Weather Metrics

In this section, we retrieve high-precision, real-time data from an external API. The service accesses various environmental parameters through the following endpoint:

```
https://power.larc.nasa.gov/api/temporal/daily/point?parameters=ALLSKY_SFC_SW_DWN,T2MDEW,WS2M,PS,QV2M&community=AG&longitude=-93.5&latitude=42.0&start=%s&end=%s&format=JSON
```

API Reference: NASA POWER API Documentation

Data is retrieved for a fixed geographical location (latitude: 42.0, longitude: -93.5) via a GET request to NASA's temporal endpoint. The parameters fetched include:

- ALLSKY_SFC_SW_DWN: solar radiation (MJ/m²/day)
- T2MDEW: dew point (°C)
- WS2M: wind speed (m/s)
- PS: surface pressure (kPa)
- QV2M: specific humidity (g/kg)

After receiving the JSON response:

- Relevant fields are extracted.
- Timestamps are formatted to GridDB's required pattern: `yyyy-MM-dd HH:mm:ss`.
Invalid or missing values (represented as -999) are filtered out to ensure data quality. Here is the implementation of the CollectionService.java file. @Service public class CollectionService { @Value("${nasa.power.api.url}") private String nasaApiUrl; private static String gridDBRestUrl; private static String gridDBApiKey; @Value("${griddb.rest.url}") public void setGridDBRestUrl(String in) { gridDBRestUrl = in; } @Value("${griddb.api.key}") public void setGridDBApiKey(String in) { gridDBApiKey = in; } public void fetchAndStoreData(String startDate, String endDate) { try { // Fetch JSON data from NASA POWER API for the requested date range String jsonData = fetchJSONFromNASA(String.format( "%s?parameters=ALLSKY_SFC_SW_DWN,T2MDEW,WS2M,PS,QV2M&community=AG&longitude=-93.5&latitude=42.0&start=%s&end=%s&format=JSON", nasaApiUrl, startDate, endDate)); // Process and send data to GridDB Cloud sendBatchToGridDB(jsonData); } catch (Exception e) { throw new RuntimeException("Failed to fetch and store data", e); } } private String fetchJSONFromNASA(String urlString) throws Exception { URL url = new URL(urlString); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestMethod("GET"); conn.setRequestProperty("Accept", "application/json"); if (conn.getResponseCode() != 200) { throw new RuntimeException("Failed to fetch data: HTTP error code : " + conn.getResponseCode()); } Scanner scanner = new Scanner(url.openStream()); StringBuilder response = new StringBuilder(); while (scanner.hasNext()) { response.append(scanner.nextLine()); } scanner.close(); conn.disconnect(); return response.toString(); } private void sendBatchToGridDB(String jsonData) throws Exception { JSONArray batchData = new JSONArray(); ObjectMapper mapper = new ObjectMapper(); JsonNode root = mapper.readTree(jsonData); JsonNode data = root.path("properties").path("parameter"); JsonNode allSkyNode = data.path("ALLSKY_SFC_SW_DWN"); // Iterate over the field names (dates) in ALLSKY_SFC_SW_DWN Iterator<String>
dateIterator = allSkyNode.fieldNames(); while (dateIterator.hasNext()) { String dateStr = dateIterator.next(); double solarRadiation = allSkyNode.path(dateStr).asDouble(); double dewPoint = data.path("T2MDEW").path(dateStr).asDouble(); double windSpeed = data.path("WS2M").path(dateStr).asDouble(); double surfacePressure = data.path("PS").path(dateStr).asDouble(); double specificHumidity = data.path("QV2M").path(dateStr).asDouble(); // Skip records with -999 (missing data) if (solarRadiation == -999 || dewPoint == -999 || windSpeed == -999 || surfacePressure == -999 || specificHumidity == -999) { continue; } JSONArray rowArray = new JSONArray(); rowArray.put(formatTimestamp(dateStr)); rowArray.put(42.0); // latitude rowArray.put(-93.5); // longitude rowArray.put(solarRadiation); rowArray.put(dewPoint); rowArray.put(windSpeed); rowArray.put(surfacePressure); rowArray.put(specificHumidity); batchData.put(rowArray); } if (batchData.length() > 0) { sendPutRequest(batchData); } else { System.out.println("No valid data to send to GridDB."); } } private String formatTimestamp(String inputTimestamp) { try { if (inputTimestamp == null || inputTimestamp.isEmpty()) { return LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss")) + "Z"; } SimpleDateFormat sdf = new SimpleDateFormat("yyyyMMdd"); SimpleDateFormat outputFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'"); return outputFormat.format(sdf.parse(inputTimestamp)); } catch (Exception e) { return LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyy-MM-dd'T'HH:mm:ss")) + "Z"; } } private void sendPutRequest(JSONArray batchData) throws Exception { URL url = new URL(gridDBRestUrl); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setDoOutput(true); conn.setRequestMethod("PUT"); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("Authorization", gridDBApiKey); // Send JSON Data try (var os = conn.getOutputStream()) {
os.write(batchData.toString().getBytes()); os.flush(); } int responseCode = conn.getResponseCode(); if (responseCode == HttpURLConnection.HTTP_OK || responseCode == HttpURLConnection.HTTP_CREATED) { System.out.println("Batch inserted successfully."); } else { throw new RuntimeException("Failed to insert batch. Response: " + responseCode); } conn.disconnect(); } } Batch Insertion to GridDB Next, cleaned data is sent to the CropHealthData container in GridDB using a PUT request via the /CropHealthData/row endpoint. This process: Utilizes batch insertion to reduce network overhead. Takes advantage of GridDB’s high-throughput performance, which is ideal for time-series ingestion at scale. Reference: GridDB Performance Overview Data Retrieval and Analytical Processing The CropHealthService is responsible for retrieving data from GridDB and performing analytical computations to derive actionable indicators. It queries the CropHealthData container using GridDB’s REST API, which returns a JSON response with a rows array. Each row contains eight fields, with timestamps formatted as yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSSZ (e.g., 2025-01-01T00:00:00.000000000Z). The service parses this response, mapping each row to a CropHealthData DTO object. package mycode.dto; import lombok.AllArgsConstructor; import lombok.Data; import lombok.NoArgsConstructor; import java.util.Date; @Data @AllArgsConstructor @NoArgsConstructor public class CropHealthData { private Date timestamp; private double latitude; private double longitude; private double solarRadiation; private double dewPoint; private double windSpeed; private double surfacePressure; private double specificHumidity; private String status; } This object is then used to compute five key indicators: Photosynthetic Stress: Identifies days with solar radiation below 10 MJ/m²/day, aggregated by month to highlight periods of reduced photosynthesis.
Fungal Risk: Counts days with dew point above 20°C within a 7-day window, signaling potential fungal disease outbreaks. Wind Stress: Tracks days with wind speed exceeding 5 m/s, aggregated weekly to assess mechanical stress on crops. Atmospheric Stress: Detects daily surface pressure drops greater than 2 kPa, indicating weather instability. Moisture Deficit: Calculates weekly average specific humidity to evaluate water availability. These indicators transform raw weather data into insights tailored for agriculture. For example, a high fungal risk score prompts farmers to apply fungicides, while persistent low radiation signals the need for supplemental lighting. Visualization: Unified Dashboard API Visualization is a key feature of the system, delivered through a Thymeleaf-based dashboard powered by Chart.js for dynamic, interactive charts. The CropHealthController serves the dashboard view at: GET /crop-health/dashboard This endpoint calls CropHealthService.getVisualizationData for each of the ten charts and adds the results to the view model. Here is the complete implementation from the CropHealthController.java file.
@Controller @RequestMapping("/crop-health") public class CropHealthController { @Autowired private CropHealthService cropHealthService; @GetMapping("/dashboard") public String dashboard(Model model) { model.addAttribute("solarRadiationData", cropHealthService.getVisualizationData("solar_radiation")); model.addAttribute("dewPointData", cropHealthService.getVisualizationData("dew_point")); model.addAttribute("windSpeedData", cropHealthService.getVisualizationData("wind_speed")); model.addAttribute("surfacePressureData", cropHealthService.getVisualizationData("surface_pressure")); model.addAttribute("specificHumidityData", cropHealthService.getVisualizationData("specific_humidity")); model.addAttribute("photosyntheticStressData", cropHealthService.getVisualizationData("photosynthetic_stress")); model.addAttribute("fungalRiskData", cropHealthService.getVisualizationData("fungal_risk")); model.addAttribute("windStressData", cropHealthService.getVisualizationData("wind_stress")); model.addAttribute("atmosphericStressData", cropHealthService.getVisualizationData("atmospheric_stress")); model.addAttribute("moistureDeficitData", cropHealthService.getVisualizationData("moisture_deficit")); return "dashboard"; } } Running the Project To run the project, execute the following command to build and run our application: mvn clean install && mvn spring-boot:run Accessing the Dashboard After successfully launching the Spring Boot application, users can access the interactive visualization dashboard by opening a web browser and navigating to: `http://localhost:9090/crop-health/dashboard`.
The dashboard presents a comprehensive view of crop health through ten interactive charts: Environmental Metrics (Line Charts) Solar Radiation: Daily solar radiation values in MJ/m²/day, helping identify optimal photosynthesis periods Dew Point: Temperature at which air becomes saturated (°C), crucial for fungal disease prediction Wind Speed: Daily wind measurements in m/s, indicating potential mechanical stress on crops Surface Pressure: Atmospheric pressure readings in kPa, showing weather stability Specific Humidity: Daily moisture content in g/kg, essential for irrigation planning Crop Stress Indicators Photosynthetic Stress: Monthly bar chart showing days with suboptimal radiation (< 10 MJ/m²/day) Fungal Risk: Gauge chart displaying 7-day dew point risk assessment (> 20°C) Wind Stress: Weekly bar chart tracking high wind events (> 5 m/s) Atmospheric Stress: Area chart highlighting significant pressure drops (> 2 kPa/day) Moisture Deficit: Weekly line chart of humidity averages for water management Conclusion: Precision farming relies on fast, accurate time-series data to make informed decisions that improve crop health and yield. By harnessing GridDB’s ability to handle large-scale, high-frequency environmental data in real time, farmers can detect stress factors early and respond proactively. This timely insight reduces waste, optimizes resource use, and ultimately leads to more sustainable and efficient agricultural practices. Fast, reliable time-series databases are essential for unlocking the full potential of precision agriculture in today’s data-driven world.
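To make the indicator logic described above concrete, here is a minimal, self-contained sketch of the photosynthetic-stress aggregation (days with solar radiation below 10 MJ/m²/day, counted per month). The class and method names are illustrative, not taken from the project’s source.

```java
import java.time.LocalDate;
import java.util.LinkedHashMap;
import java.util.Map;

public class StressIndicators {
    // Threshold from the article: days below 10 MJ/m^2/day indicate photosynthetic stress.
    static final double RADIATION_THRESHOLD = 10.0;

    // Count low-radiation days per month (key "yyyy-MM"), as the monthly bar chart does.
    public static Map<String, Integer> photosyntheticStressByMonth(Map<LocalDate, Double> dailyRadiation) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (Map.Entry<LocalDate, Double> e : dailyRadiation.entrySet()) {
            if (e.getValue() < RADIATION_THRESHOLD) {
                String month = e.getKey().getYear() + "-" + String.format("%02d", e.getKey().getMonthValue());
                counts.merge(month, 1, Integer::sum);
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<LocalDate, Double> data = new LinkedHashMap<>();
        data.put(LocalDate.of(2025, 1, 1), 8.5);
        data.put(LocalDate.of(2025, 1, 2), 12.0);
        data.put(LocalDate.of(2025, 2, 1), 9.9);
        System.out.println(photosyntheticStressByMonth(data)); // {2025-01=1, 2025-02=1}
    }
}
```

The other four indicators follow the same shape: a threshold test per day, aggregated over a weekly or monthly window.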

With GridDB Cloud 3.1, you can now access the native API of GridDB through Azure’s virtual peering network connection. The way it works is that any virtual network (vnet) that you set up in your Azure cloud environment can set up what is called a peering connection, which allows two disparate sources to communicate through Azure’s vast resources. Through this, any virtual machine connected to that vnet can communicate and use the GridDB Cloud native APIs. We discuss this at greater length here: https://griddb.net/en/blog/griddb-cloud-v3-1-how-to-use-the-native-apis-with-azures-vnet-peering/ In this article, we will build upon that idea and teach you how to set up a VPN which will allow you to access your GridDB Cloud through your local environment, meaning you can freely use GridDB with your existing application code as long as you connect to the VPN. Prereqs To fully utilize GridDB Cloud with native APIs in your local environment, you will need to, of course, have access to one of the paid GridDB Cloud instances: https://griddb.net/en/blog/griddb-cloud-azure-marketplace/. The nice thing, though, is that there are one-month trial versions on the marketplace so that you may try out GridDB Cloud’s features for free! You will also need to have set up the vnet peering as described in the opening paragraphs of this article: GridDB Cloud v3.1 – How to Use the Native APIs with Azure’s VNET Peering If you have this set up, you should have the following in your Azure resource: GridDB Cloud (Pay As You Go) Azure Virtual Network with peering connection to GridDB Cloud A virtual machine connected to the above vnet Please note that all of the above will incur some sort of cost on Azure (for example, an Azure VM b1 instance costs roughly ~$8/month if left on at all times). OpenVPN and IP Masquerading The way this setup works is through something called IP Masquerading, which is “a process where one computer acts as an IP gateway for a network.
All computers on the network send their IP packets through the gateway, which replaces the source IP address with its own address and then forwards it to the internet.” (https://www.linux.com/training-tutorials/what-ip-masquerading-and-when-it-use/). Essentially, the traffic from your local machine is addressed to the GridDB Cloud IP but routes through the VPN first. To the database, the request looks like it is coming from a machine within the network (the VM), so it accepts it, and its response travels back through the virtual network, through the virtual machine, and to your local environment. So to get this running, you simply need to set up OpenVPN on the Azure virtual machine and then turn on the rule to do IP Masquerading, and it will work. Install OpenVPN To install OpenVPN and the client certs for my machine, I used the guide from Ubuntu: https://documentation.ubuntu.com/server/how-to/security/install-openvpn/. Through this guide, you will have OpenVPN installed on your Azure VM and then will have certs on your local machine that can connect to your VM. 1. Install OpenVPN & Easy-RSA sudo apt install openvpn easy-rsa 2. Set Up the PKI (Certificate Authority) sudo make-cadir /etc/openvpn/easy-rsa cd /etc/openvpn/easy-rsa/ Initialize PKI: ./easyrsa init-pki Build the CA: ./easyrsa build-ca 3. Generate Server Certificates Generate server key request: ./easyrsa gen-req myservername nopass Generate Diffie-Hellman params: ./easyrsa gen-dh Sign server certificate: ./easyrsa sign-req server myservername Copy required files into /etc/openvpn/: pki/dh.pem pki/ca.crt pki/issued/myservername.crt pki/private/myservername.key 4. Create Client Certificates Generate client key request: ./easyrsa gen-req myclient1 nopass Sign client cert: ./easyrsa sign-req client myclient1 Securely copy to the client machine: ca.crt (from earlier) myclient1.crt (inside /pki/issued) myclient1.key (inside /pki/private) 5.
Configure the OpenVPN Server Copy sample config: sudo cp /usr/share/doc/openvpn/examples/sample-config-files/server.conf /etc/openvpn/myserver.conf Edit myserver.conf so these lines reference your certs: ca ca.crt cert myservername.crt key myservername.key dh dh.pem Generate TLS auth key: sudo openvpn --genkey secret ta.key Enable IP forwarding: Edit /etc/sysctl.conf, set: net.ipv4.ip_forward=1 Apply: sudo sysctl -p /etc/sysctl.conf Start the server: sudo systemctl start openvpn@myserver 6. Configure the Client Install OpenVPN: sudo apt install openvpn Copy sample config: sudo cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn/ Place the files on client: ca.crt myclient1.crt myclient1.key ta.key Edit client.conf: client remote your.server.ip 1194 ca ca.crt cert myclient1.crt key myclient1.key tls-auth ta.key 1 Start client: sudo systemctl start openvpn@client 7. Quick Troubleshooting Check logs: sudo journalctl -u openvpn@myserver -xe sudo journalctl -u openvpn@client -xe Ensure: Ports match Protocol (udp/tcp) matches tls-auth index matches (0 on server, 1 on client) Same cipher, auth, and dev tun settings IP Masquerading As explained above, if you try it now, it simply won’t work, as the traffic will be routed to the GridDB DB from the IP on your local environment which is blocked due to security rules. But once this setting is turned on, it will work. Run the following command in your VM: sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE. And that should do it! To ensure it works, you can of course run the sample code based on the previous blog. But before going through that effort, you can also simply try this: from the local environment (connected to the VPN), ping the IP of your GridDB Cloud DB (can be fetched from the notification provider URL in the GridDB Cloud DB UI home page) ping 172.26.30.68.
And then on your Azure VM (the one hosting the VPN and that can also connect to GridDB Cloud) run: sudo tcpdump -i eth0 -n host 172.26.30.68. If successful, your pings to GridDB Cloud should be routed through the VM and be heading to its destination. Cool! To run the sample code, you can start by cloning the GitHub repo and changing to the correct branch: $ git clone https://github.com/griddbnet/Blogs.git --branch griddb_cloud_paid_guide Then set your env variables for your GridDB Connection: export GRIDDB_NOTIFICATION_PROVIDER="" export GRIDDB_CLUSTER_NAME="" export GRIDDB_USERNAME="" export GRIDDB_PASSWORD="" export GRIDDB_DATABASE="" And then from here, navigate to either the java or python dirs and run them! For Java, do: - $ mvn clean package - $ java -jar target/java-samples-1.0-SNAPSHOT-jar-with-dependencies.jar For Python, after installing the Python client, you can install the requirements file (python3.12 -m pip install -r requirements.txt), make sure your JAVA_HOME and CLASSPATH env variables are set, and then run the code python3.12
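As a small illustration of how application code might pick up the GRIDDB_* variables exported above, here is a hedged Java sketch. The environment-variable names come from the article; the property keys and fallback defaults are placeholders, not the native client’s exact names.

```java
import java.util.Properties;

public class GridDbEnv {
    // Read an environment variable, falling back to a default when unset or empty.
    public static String envOr(String name, String fallback) {
        String v = System.getenv(name);
        return (v == null || v.isEmpty()) ? fallback : v;
    }

    // Collect the connection settings the sample code expects from the environment.
    // Property keys below are illustrative placeholders.
    public static Properties connectionProps() {
        Properties p = new Properties();
        p.setProperty("notificationProvider", envOr("GRIDDB_NOTIFICATION_PROVIDER", ""));
        p.setProperty("clusterName", envOr("GRIDDB_CLUSTER_NAME", "myCluster"));
        p.setProperty("user", envOr("GRIDDB_USERNAME", "admin"));
        p.setProperty("password", envOr("GRIDDB_PASSWORD", ""));
        p.setProperty("database", envOr("GRIDDB_DATABASE", "public"));
        return p;
    }

    public static void main(String[] args) {
        System.out.println(connectionProps());
    }
}
```

Reading every setting through one helper keeps the connection bootstrap in a single place, so the same binary works locally (over the VPN) and on the Azure VM without code changes.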

Welcome! We’re about to build something useful: a volunteer-matching platform that connects skilled medical professionals with health organizations that need them. It’s the kind of system you’d see powering real health events, from blood drives to vaccination clinics. By the time we’re done, you’ll understand how to architect and deploy a complete full-stack application that handles real-world complexity, including matching qualified people to opportunities, managing permissions across different user roles, and keeping everything secure. The Stack: Technologies That Work Together We’re using a carefully selected tech stack that mirrors what you’ll find in production environments: Spring Boot & Thymeleaf handle the business rules and data orchestration, and render dynamic HTML templates on the server side. GridDB (cloud-hosted NoSQL datastore) stores volunteer profiles, opportunities, and applications. Each technology serves a specific purpose, and together they create a seamless user experience backed by robust backend logic. Learning Roadmap We’ll move from foundation to mastery: Setup & Architecture: We’ll start by understanding the three-layer system design, laying out your Maven project structure, and configuring Spring Boot for success. Core Features: Next, we’ll implement the data model (entities, relationships, indexing) and set up GridDB integration. User Interface & Experience: Then we’ll create server-rendered Thymeleaf templates for browsing opportunities, applying for roles, and managing skills. You’ll see how server-side rendering keeps everything simple. Security: We’ll add Spring Security authentication and implement role-based access control, ensuring organizers see different screens than volunteers and that data stays protected. Real-World Patterns: Finally, we’ll integrate real-time slot updates. By completing this tutorial, you’ll understand how to architect a full-stack Java application from database to user interface.
More importantly, you’ll have a complete, deployable system you can adapt to other matching problems. Let’s build something real. Project Setup Here’s how we’ll set it up: Navigate to start.spring.io Configure your project: Project: Maven Language: Java Spring Boot: 3.5.x (latest stable version) Group: com.example Artifact: springboot-volunteermatching Java Version: 21 Add the following dependencies: Spring Web Thymeleaf Spring Security Click Generate to download a ZIP file with our project structure Once you’ve downloaded and extracted the project, import it into your IDE. Next, we will create the package structure by grouping the classes based on their respective entities, e.g., a package organization contains the controller, service, DTO, etc. volunteer-matching/ ├── pom.xml ├── src/main/java/com/volunteermatching/ │ ├── config/ (RestClient config) │ ├── griddb/ │ ├── griddbwebapi/ │ ├── opportunity/ │ ├── opportunity_requirement/ │ ├── organization/ │ ├── organization_member/ │ ├── registration/ │ ├── security/ (Auth filters, RBAC) │ ├── skill/ │ ├── user/ │ └── volunteer_skill/ └── src/main/resources/ ├── templates/ (Thymeleaf templates) └── application.properties (Configuration) Connecting to the GridDB Cloud Configure the credentials for connecting to the GridDB Cloud through HTTP. Add the following to application.properties: # GridDB Configuration griddbcloud.base-url=https://cloud5197.griddb.com:443/griddb/v2/gs_cluster griddbcloud.auth-token=TTAxxxxxxx Next, create a bean of org.springframework.web.client.RestClient that provides a fluent, builder-based API for sending synchronous HTTP requests with cleaner syntax and improved readability.
@Configuration public class RestClientConfig { final Logger LOGGER = LoggerFactory.getLogger(RestClientConfig.class); @Bean("GridDbRestClient") public RestClient gridDbRestClient( @NonNull @Value("${griddbcloud.base-url}") final String baseUrl, @NonNull @Value("${griddbcloud.auth-token}") final String authToken) { return RestClient.builder() .baseUrl(baseUrl) .defaultHeader(HttpHeaders.AUTHORIZATION, "Basic " + authToken) .defaultHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE) .defaultHeader(HttpHeaders.ACCEPT, MediaType.APPLICATION_JSON_VALUE) .defaultStatusHandler( status -> status.is4xxClientError() || status.is5xxServerError(), (request, response) -> { String responseBody = getResponseBody(response); LOGGER.error("GridDB API error: status={} body={}", response.getStatusCode(), responseBody); if (response.getStatusCode().value() == 403) { LOGGER.error("Access forbidden - please check your auth token and permissions."); throw new ForbiddenGridDbConnectionException("Access forbidden to GridDB Cloud API."); } throw new GridDbException("GridDB API error: ", response.getStatusCode(), responseBody); }) .requestInterceptor((request, body, execution) -> { final long begin = System.currentTimeMillis(); ClientHttpResponse response = execution.execute(request, body); logDuration(request, body, begin, response); return response; }) .build(); } } @Bean("GridDbRestClient"): registers this client as a Spring bean so we can inject it anywhere with @Qualifier("GridDbRestClient") final RestClient restClient. .baseUrl(baseUrl): sets the common base URL for all requests. .defaultHeader(…): adds a header that will be sent with every request. .defaultStatusHandler(…): when the API returns an error (4xx or 5xx status code), logs the error status. If the status is 403, throws a custom ForbiddenGridDbConnectionException. For any other error, it throws a general GridDbException. .requestInterceptor(…): logs how long the request took for debugging performance.
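The status-handler’s branching can be illustrated in isolation, without Spring. This is only a sketch of the decision logic; the real handler also logs the response body and carries the status code into the thrown exception. The class and method names here are hypothetical.

```java
public class StatusClassifier {
    // Mirror the defaultStatusHandler branches: 403 gets its own exception type,
    // any other 4xx/5xx becomes a generic GridDbException, everything else passes through.
    public static String classify(int status) {
        boolean isError = (status >= 400 && status <= 599);
        if (!isError) return "OK";
        if (status == 403) return "ForbiddenGridDbConnectionException";
        return "GridDbException";
    }

    public static void main(String[] args) {
        System.out.println(classify(200)); // OK
        System.out.println(classify(403)); // ForbiddenGridDbConnectionException
        System.out.println(classify(500)); // GridDbException
    }
}
```

Separating the 403 case lets callers distinguish a bad auth token (fix configuration) from transient server errors (retry or surface a generic failure).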
Next, create a helper that will be used by each service class to talk to the GridDB Cloud over the internet using HTTP requests. It wraps a pre-configured RestClient and provides easy-to-use methods for common database operations. All the complicated stuff (URLs, headers, error handling) is hidden inside this class. @Component public class GridDbClient { private final RestClient restClient; public GridDbClient(@Qualifier("GridDbRestClient") final RestClient restClient) { this.restClient = restClient; } public void createContainer(final GridDbContainerDefinition containerDefinition) { try { restClient .post() .uri("/containers") .body(containerDefinition) .retrieve() .toBodilessEntity(); } catch (Exception e) { throw new GridDbException("Failed to create container", HttpStatusCode.valueOf(500), e.getMessage(), e); } } public void registerRows(String containerName, Object body) { try { ResponseEntity result = restClient .put() .uri("/containers/" + containerName + "/rows") .body(body) .retrieve() .toEntity(String.class); } catch (Exception e) { throw new GridDbException("Failed to execute PUT request", HttpStatusCode.valueOf(500), e.getMessage(), e); } } public AcquireRowsResponse acquireRows(String containerName, AcquireRowsRequest requestBody) { try { ResponseEntity responseEntity = restClient .post() .uri("/containers/" + containerName + "/rows") .body(requestBody) .retrieve() .toEntity(AcquireRowsResponse.class); return responseEntity.getBody(); } catch (Exception e) { throw new GridDbException("Failed to execute GET request", HttpStatusCode.valueOf(500), e.getMessage(), e); } } public SQLSelectResponse[] select(List sqlStmts) { try { ResponseEntity responseEntity = restClient .post() .uri("/sql/dml/query") .body(sqlStmts) .retrieve() .toEntity(SQLSelectResponse[].class); return responseEntity.getBody(); } catch (Exception e) { throw new GridDbException("Failed to execute /sql/dml/query", HttpStatusCode.valueOf(500), e.getMessage(), e); } } public
SqlExecutionResult[] executeSqlDDL(List sqlStmts) { try { ResponseEntity responseEntity = restClient.post().uri("/sql/ddl").body(sqlStmts).retrieve().toEntity(SqlExecutionResult[].class); return responseEntity.getBody(); } catch (Exception e) { throw new GridDbException("Failed to execute SQL DDL", HttpStatusCode.valueOf(500), e.getMessage(), e); } } public SQLUpdateResponse[] executeSQLUpdate(List sqlStmts) { try { ResponseEntity responseEntity = restClient .post() .uri("/sql/dml/update") .body(sqlStmts) .retrieve() .toEntity(SQLUpdateResponse[].class); return responseEntity.getBody(); } catch (Exception e) { throw new GridDbException("Failed to execute /sql/dml/update", HttpStatusCode.valueOf(500), e.getMessage(), e); } } } The constructor takes a RestClient that was named GridDbRestClient. The @Qualifier makes sure we get the correct one. Every method follows the same safe structure: Try to send an HTTP request using restClient. If something goes wrong (network issue, wrong data, server error), catch the exception. Data Model using DTOs Now, let’s create the Data Transfer Objects (DTOs). DTOs are simple classes that carry information from one part of the app to another, for example, from the database to the screen. In this project, the DTOs represent important things like users, skills, organizations, and volunteer events. Each DTO has its own fields to hold the data. Each DTO matches the structure of rows inside one GridDB container. UserDTO: represents a user in the system, such as a volunteer or an organization admin. It’s used to create, update, or display user information. public class UserDTO { @Size(max = 255) @UserIdValid private String id; @NotNull @Size(max = 255) @UserEmailUnique private String email; @NotNull @Size(max = 255) private String fullName; @NotNull private UserRole role; // Setter and Getter } SkillDTO: represents a skill that volunteers can have, such as “First Aid” or “Paramedic.” It’s used to manage the list of available skills.
public class SkillDTO { @Size(max = 255) private String id; @NotNull @Size(max = 255) @SkillNameUnique private String name; public SkillDTO() {} public SkillDTO(String id, String name) { this.id = id; this.name = name; } // Setter and Getter } VolunteerSkillDTO: links a user (volunteer) to a specific skill. It includes details like when the skill expires and its verification status. It’s useful for tracking what skills a volunteer has and their validity. OrganizationDTO: represents an organization that creates volunteer opportunities. It’s used to manage organization details. OrganizationMemberDTO: links a user to an organization, specifying their role within it (e.g., member or admin). It’s used to manage who belongs to which organization. OpportunityDTO: represents a volunteer opportunity, like an event that needs volunteers. It’s used to create and display opportunities. OpportunityRequirementDTO: specifies the skills required for a volunteer opportunity. It links an opportunity to skills and indicates if a skill is mandatory. RegistrationDTO: represents a volunteer’s registration for an opportunity. It tracks who signed up and the status of their registration. Service Layer and Business Logic Next, we implement the service layer. The services will utilize these DTOs to handle business logic, communicate with GridDB Cloud through our client, and prepare data for the controllers. The service classes do not use a repository layer like JPA. Instead, each connects directly to GridDB through the GridDbClient helper. Each service class implements its service interface, which means it must provide methods like findAll() to get all rows, get() to find one by ID, create() to add a new row, and others. When fetching data, it sends requests to GridDB to get rows, then maps those rows into DTO objects. For saving or updating, it builds a string in JSON format with the data and sends it to GridDB Cloud.
It also generates unique IDs using TsidCreator and handles date times carefully by parsing and formatting them. @Service public class RegistrationGridDBService implements RegistrationService { private final Logger log = LoggerFactory.getLogger(getClass()); private final GridDbClient gridDbClient; private final String TBL_NAME = "VoMaRegistrations"; public RegistrationGridDBService(final GridDbClient gridDbClient) { this.gridDbClient = gridDbClient; } public void createTable() { List<GridDbColumn> columns = List.of( new GridDbColumn("id", "STRING", Set.of("TREE")), new GridDbColumn("userId", "STRING", Set.of("TREE")), new GridDbColumn("opportunityId", "STRING", Set.of("TREE")), new GridDbColumn("status", "STRING"), new GridDbColumn("registrationTime", "TIMESTAMP")); GridDbContainerDefinition containerDefinition = GridDbContainerDefinition.build(TBL_NAME, columns); this.gridDbClient.createContainer(containerDefinition); } @Override public List<RegistrationDTO> findAll() { AcquireRowsRequest requestBody = AcquireRowsRequest.builder().limit(50L).sort("id ASC").build(); AcquireRowsResponse response = this.gridDbClient.acquireRows(TBL_NAME, requestBody); if (response == null || response.getRows() == null) { log.error("Failed to acquire rows from GridDB"); return List.of(); } return response.getRows().stream() .map(row -> { return extractRowToDTO(row); }) .collect(Collectors.toList()); } private RegistrationDTO extractRowToDTO(List<Object> row) { RegistrationDTO dto = new RegistrationDTO(); dto.setId((String) row.get(0)); dto.setUserId((String) row.get(1)); dto.setOpportunityId((String) row.get(2)); try { dto.setStatus(RegistrationStatus.valueOf(row.get(3).toString())); } catch (Exception e) { dto.setStatus(null); } try { dto.setRegistrationTime(DateTimeUtil.parseToLocalDateTime(row.get(4).toString())); } catch (Exception e) { dto.setRegistrationTime(null); } return dto; } @Override public RegistrationDTO get(final String id) { AcquireRowsRequest requestBody =
AcquireRowsRequest.builder() .limit(1L) .condition("id == '" + id + "'") .build(); AcquireRowsResponse response = this.gridDbClient.acquireRows(TBL_NAME, requestBody); if (response == null || response.getRows() == null) { log.error("Failed to acquire rows from GridDB"); throw new NotFoundException("Registration not found with id: " + id); } return response.getRows().stream() .findFirst() .map(row -> { return extractRowToDTO(row); }) .orElseThrow(() -> new NotFoundException("Registration not found with id: " + id)); } public String nextId() { return TsidCreator.getTsid().format("reg_%s"); } @Override public String register(String userId, String opportunityId) { RegistrationDTO registrationDTO = new RegistrationDTO(); registrationDTO.setUserId(userId); registrationDTO.setOpportunityId(opportunityId); registrationDTO.setStatus(RegistrationStatus.PENDING); registrationDTO.setRegistrationTime(LocalDateTime.now()); return create(registrationDTO); } } Implement the validation We create a dedicated Service class for validating volunteer registration requests against opportunity requirements. Some benefits of this approach: Hides the complexity.
If the rules change later (e.g., “User needs 2 out of 3 skills”), we only change that one place Business validation logic isolated from HTTP concern Validation service can be used by REST APIs or other controllers Service can be unit tested independently Clear, focused exception handling with rich context @Service public class RegistrationValidationService { private final Logger log = LoggerFactory.getLogger(getClass()); private final RegistrationService registrationService; private final OpportunityService opportunityService; private final OpportunityRequirementService opportunityRequirementService; private final VolunteerSkillService volunteerSkillService; private final SkillService skillService; public RegistrationValidationService( final RegistrationService registrationService, final OpportunityService opportunityService, final OpportunityRequirementService opportunityRequirementService, final VolunteerSkillService volunteerSkillService, final SkillService skillService) { this.registrationService = registrationService; this.opportunityService = opportunityService; this.opportunityRequirementService = opportunityRequirementService; this.volunteerSkillService = volunteerSkillService; this.skillService = skillService; } public void validateRegistration(final String userId, final String opportunityId) { // Check 1: User not already registered validateNotAlreadyRegistered(userId, opportunityId); // Check 2: Opportunity has available slots validateSlotsAvailable(opportunityId); // Check 3: User has mandatory skills validateMandatorySkills(userId, opportunityId); } private void validateNotAlreadyRegistered(final String userId, final String opportunityId) { Optional existingReg = registrationService.getByUserIdAndOpportunityId(userId, opportunityId); if (existingReg.isPresent()) { throw new AlreadyRegisteredException(userId, opportunityId); } } private void validateSlotsAvailable(final String opportunityId) { OpportunityDTO opportunity = 
opportunityService.get(opportunityId); Long registeredCount = registrationService.countByOpportunityId(opportunityId); if (registeredCount >= opportunity.getSlotsTotal()) { throw new OpportunitySlotsFullException(opportunityId, opportunity.getSlotsTotal(), registeredCount); } } private void validateMandatorySkills(final String userId, final String opportunityId) { List userSkills = volunteerSkillService.findAllByUserId(userId); List opportunityRequirements = opportunityRequirementService.findAllByOpportunityId(opportunityId); for (OpportunityRequirementDTO requirement : opportunityRequirements) { if (!requirement.getIsMandatory()) { continue; } boolean hasSkill = userSkills.stream() .anyMatch(userSkill -> userSkill.getSkillId().equals(requirement.getSkillId())); if (!hasSkill) { SkillDTO skill = skillService.get(requirement.getSkillId()); String skillName = skill != null ? skill.getName() : “Unknown Skill”; throw new MissingMandatorySkillException(userId, opportunityId, requirement.getSkillId(), skillName); } } } } This service depends on 5 collaborating services (Opportunity, OpportunityRequirement, VolunteerSkill, Skill, Registration) and throws custom exceptions for each validation failure, allowing callers to handle different error scenarios appropriately (e.g., different error messages, logging, etc). HTTP Layer Now, we need a class that handles all incoming web requests, processes user input, and sends back responses. It’s the bridge between the user’s browser and the application’s code logic. 
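Before the controller, it helps to see the shape of one of the custom exceptions the validation service throws. The project's actual exception classes aren't shown in this post, so the following is a minimal sketch, assuming the constructor signature used above (`userId`, `opportunityId`, `skillId`, `skillName`) and the `getSkillName()` accessor the controller relies on:

```java
// Hypothetical sketch of one validation exception; the constructor
// arguments and accessors mirror how the service and controller use it.
public class MissingMandatorySkillException extends RuntimeException {

    private final String userId;
    private final String opportunityId;
    private final String skillId;
    private final String skillName;

    public MissingMandatorySkillException(String userId, String opportunityId,
            String skillId, String skillName) {
        // Carry rich context so callers can log or render a precise message
        super("User " + userId + " is missing mandatory skill '" + skillName
                + "' (" + skillId + ") for opportunity " + opportunityId);
        this.userId = userId;
        this.opportunityId = opportunityId;
        this.skillId = skillId;
        this.skillName = skillName;
    }

    public String getUserId() { return userId; }
    public String getOpportunityId() { return opportunityId; }
    public String getSkillId() { return skillId; }
    public String getSkillName() { return skillName; }
}
```

Because the exception carries the skill name, the HTTP layer can surface a message like "missing skill: First Aid" without re-querying anything.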
```java
@Controller
@RequestMapping("/opportunities")
public class OpportunityController {

    private final Logger log = LoggerFactory.getLogger(getClass());

    private final OpportunityService opportunityService;
    private final RegistrationService registrationService;
    private final RegistrationValidationService registrationValidationService;
    private final UserService userService;
    private final OpportunityRequirementService opportunityRequirementService;
    private final SkillService skillService;

    public OpportunityController(
            final OpportunityService opportunityService,
            final RegistrationService registrationService,
            final RegistrationValidationService registrationValidationService,
            final UserService userService,
            final OpportunityRequirementService opportunityRequirementService,
            final SkillService skillService) {
        this.opportunityService = opportunityService;
        this.registrationService = registrationService;
        this.registrationValidationService = registrationValidationService;
        this.userService = userService;
        this.opportunityRequirementService = opportunityRequirementService;
        this.skillService = skillService;
    }

    @GetMapping
    public String list(final Model model,
            @AuthenticationPrincipal final CustomUserDetails userDetails) {
        List<OpportunityDTO> allOpportunities = new ArrayList<>();
        UserDTO user = userDetails != null
                ? userService.getOneByEmail(userDetails.getUsername()).orElse(null)
                : null;
        if (userDetails != null && userDetails.getOrganizations() != null
                && !userDetails.getOrganizations().isEmpty()) {
            OrganizationDTO org = userDetails.getOrganizations().get(0);
            model.addAttribute("organization", org);
            allOpportunities = opportunityService.findAllByOrgId(org.getId());
        } else {
            model.addAttribute("organization", null);
            allOpportunities = opportunityService.findAll();
        }
        List<OpportunityDTO> opportunities = extractOpportunities(allOpportunities, user);
        model.addAttribute("opportunities", opportunities);
        return "opportunity/list";
    }

    @GetMapping("/add")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String add(
            @ModelAttribute("opportunity") final OpportunityDTO opportunityDTO,
            final Model model,
            @AuthenticationPrincipal final CustomUserDetails userDetails) {
        if (userDetails != null && userDetails.getOrganizations() != null
                && !userDetails.getOrganizations().isEmpty()) {
            OrganizationDTO org = userDetails.getOrganizations().get(0);
            opportunityDTO.setOrgId(org.getId());
        }
        opportunityDTO.setId(opportunityService.nextId());
        return "opportunity/add";
    }

    @PostMapping("/add")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String add(
            @ModelAttribute("opportunity") @Valid final OpportunityDTO opportunityDTO,
            final BindingResult bindingResult,
            final RedirectAttributes redirectAttributes) {
        if (bindingResult.hasErrors()) {
            return "opportunity/add";
        }
        opportunityService.create(opportunityDTO);
        redirectAttributes.addFlashAttribute(WebUtils.MSG_SUCCESS,
                WebUtils.getMessage("opportunity.create.success"));
        return "redirect:/opportunities";
    }

    @GetMapping("/edit/{id}")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String edit(@PathVariable(name = "id") final String id, final Model model) {
        model.addAttribute("opportunity", opportunityService.get(id));
        return "opportunity/edit";
    }

    @PostMapping("/edit/{id}")
    @PreAuthorize(SecurityExpressions.ORGANIZER_ONLY)
    public String edit(
            @PathVariable(name = "id") final String id,
            @ModelAttribute("opportunity") @Valid final OpportunityDTO opportunityDTO,
            final BindingResult bindingResult,
            final RedirectAttributes redirectAttributes) {
        if (bindingResult.hasErrors()) {
            return "opportunity/edit";
        }
        opportunityService.update(id, opportunityDTO);
        redirectAttributes.addFlashAttribute(WebUtils.MSG_SUCCESS,
                WebUtils.getMessage("opportunity.update.success"));
        return "redirect:/opportunities";
    }

    @PostMapping("/{id}/registrations")
    public String registrations(
            @PathVariable(name = "id") final String opportunityId,
            final RedirectAttributes redirectAttributes,
            @AuthenticationPrincipal final UserDetails userDetails) {
        UserDTO user = userService
                .getOneByEmail(userDetails.getUsername())
                .orElseThrow(() -> new UsernameNotFoundException("User not found"));
        try {
            // Validate registration using the validation service
            registrationValidationService.validateRegistration(user.getId(), opportunityId);
            // If validation passes, proceed with registration
            OpportunityDTO opportunityDTO = opportunityService.get(opportunityId);
            registrationService.register(user.getId(), opportunityId);
            log.debug("Registration Successful - user: {}, opportunity: {}",
                    user.getFullName(), opportunityDTO.getTitle());
            redirectAttributes.addFlashAttribute(WebUtils.MSG_INFO,
                    WebUtils.getMessage("opportunity.registrations.success"));
            return "redirect:/opportunities/" + opportunityId;
        } catch (AlreadyRegisteredException e) {
            redirectAttributes.addFlashAttribute(WebUtils.MSG_ERROR,
                    WebUtils.getMessage("opportunity.registrations.already_registered"));
            return "redirect:/opportunities/" + opportunityId;
        } catch (OpportunitySlotsFullException e) {
            redirectAttributes.addFlashAttribute(WebUtils.MSG_ERROR,
                    WebUtils.getMessage("opportunity.registrations.full"));
            return "redirect:/opportunities/" + opportunityId;
        } catch (MissingMandatorySkillException e) {
            redirectAttributes.addFlashAttribute(WebUtils.MSG_ERROR,
                    WebUtils.getMessage("opportunity.registrations.missing_skill", e.getSkillName()));
            return "redirect:/opportunities/" + opportunityId;
        }
    }
}
```

The OpportunityController:

- Doesn't do the work itself; it delegates to specialized services. This keeps the code organized and reusable.
- Manages everything related to /opportunities URLs, for example, listing volunteer opportunities.
- Receives the services it needs via constructor injection.
- Uses @PreAuthorize to ensure only authorized users can perform certain actions.
- Validates registrations through registrationValidationService; if validation fails, it catches the specific exception and shows a matching error message.
- Stays a clean controller, focused on orchestration only.

User Interface Preview

Listing opportunity page:

Register page:

Conclusion

Building a volunteer-matching web application for a health event is a practical project that trains core skills: Spring Boot service design, a server-rendered Thymeleaf UI, cloud NoSQL integration, and RBAC. Feel free to add more features like email notifications or calendar integration. Keep building, keep

Modern ecological research is largely data-driven, with actionable insights and decisions drawn from massive, complex datasets.

With GridDB Cloud now able to connect to your code outside the Web API — aka through its native NoSQL interface (Java, Python, etc.) — we can explore connecting to various Azure services through virtual network peering. Because our GridDB Cloud instance is connected, thanks to the peering connection, to anything on our virtual network, any service that can join that virtual network can now communicate directly with GridDB Cloud.

Note: this is only available for the GridDB Cloud offered on the Microsoft Azure Marketplace; the GridDB Cloud Free Plan from the Toshiba page does not support VNET peering.

The source code can be found on the griddbnet GitHub:

```shell
$ git clone https://github.com/griddbnet/Blogs.git --branch azure_connected_services
```

Introduction

In this article, we will explore connecting our GridDB Cloud instance to Azure's IoT Hub to store telemetry data. We previously made a web course on how to set up the Azure IoT Hub with GridDB Cloud through the Web API; it can be found here: https://www.udemy.com/course/griddb-and-azure-iot-hub/?srsltid=AfmBOopFTwFHI7OvQOEXt4P_cWxuo3NaJ9XkbNDHHWX5Tgky4QZzJlD3. You can also learn how to connect your GridDB Cloud instance to your Azure virtual network through VNET peering here: https://griddb.net/en/blog/griddb-cloud-v3-1-how-to-use-the-native-apis-with-azures-vnet-peering/. As a bonus, we have also written a blog on how to connect your local environment to your cloud-hosted GridDB instance through a VPN, so you can work entirely from your local programming environment: GridDB Cloud v3.1 – How to Use the Native APIs with Azure's VNET Peering.

So for this one, let's get started with our IoT Hub implementation. We will be setting up an IoT Hub with any number of devices, which will trigger a GridDB write whenever telemetry data is detected.
We will then set up another Azure Function which runs on a simple timer (every hour) and performs a simple aggregation of the IoT sensor data, keeping the data tidy and ready for analysis. There is also source code for setting up a Kafka connection on a timer, which reads all data from the past 5 minutes and streams it out through Kafka, but we won't discuss it here.

Azure's Cloud Infrastructure

Let's talk briefly about the Azure services we will need to use to get all of this running. First, the IoT Hub.

Azure's IoT Hub

You can read about what the IoT Hub does here: https://learn.microsoft.com/en-us/azure/iot-hub/. Its purpose is to make it easy to manage a fleet of real-world IoT sensors emitting data at intervals, data which needs to be stored and analyzed. For this article, we will simply create one virtual device and push data through a Python script provided by Microsoft (source code here: https://github.com/Azure/azure-iot-sdk-python). You can learn how to create the IoT Hub and how to deploy code/functions in an older blog: https://griddb.net/en/blog/iot-hub-azure-griddb-cloud/. Because that information is already there, we will continue on assuming you have built the IoT Hub in your Azure account, and we will just discuss the source code needed to get our data to GridDB Cloud through the native Java API.

Azure Functions

The real glue of this setup is our Azure Functions. For this article, I created an Azure Function Standard Plan. From there, I connected the standard plan to the virtual network which is already peer-connected to our GridDB Cloud instance. With this simple step, all of the Azure Functions we deploy on this plan will already be able to communicate with GridDB Cloud seamlessly.
And for the Azure Function that pairs with the IoT Hub to detect events and run code against that data, we will use a specific function binding in our Java code:

```java
@EventHubTrigger(name = "message", eventHubName = "events",
        connection = "IotHubConnectionString",
        consumerGroup = "myfuncapp-cg",
        cardinality = Cardinality.ONE) String message,
```

In this case, we are telling our Azure Function that whenever an event occurs in our IoT Hub (as identified by the IotHubConnectionString), we want to run the following code. The magic is all contained within Azure Functions and that IotHubConnectionString, whose value is what the IoT Hub calls the primary connection string. So when you create your Azure Function, head to Settings -> Environment Variables and set the IotHubConnectionString as well as your GridDB Cloud credentials. If you are using VS Code, you can set these vars in the local.settings.json file created when you select "Azure Functions: Create Function App" (as mentioned in the blog linked above) and then run "Azure Functions: Deploy Local Settings".

IoT Hub Event Triggering

Now let's look at the actual source code that pushes data to GridDB Cloud.

Java Source Code for Pushing Event Telemetry Data

Our goal here is to log all of our IoT Hub sensors' data into persistent storage (aka GridDB Cloud). To do this, we use Azure Functions and their special bindings/triggers. In this case, we want to detect whenever our IoT Hub's sensors receive telemetry data; that fires off our Java code, which forges a connection to GridDB through its NoSQL interface and simply writes the row of data.
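For local development, those same variables live in local.settings.json. The exact key names for the GridDB credentials depend on your own setup, so treat the following as an illustrative sketch rather than the project's literal file:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "java",
    "IotHubConnectionString": "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...",
    "GRIDDB_NOTIFICATION_PROVIDER": "[notification_provider]",
    "GRIDDB_CLUSTER_NAME": "[clustername]",
    "GRIDDB_USERNAME": "[griddb-user]",
    "GRIDDB_PASSWORD": "[password]"
  }
}
```

"Azure Functions: Deploy Local Settings" pushes the Values map into the Function App's environment variables, so the deployed function sees the same names as your local run.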
Here is the main method in Java:

```java
public class IotTelemetryHandler {

    private static GridDB griddb = null;
    private static final ObjectMapper MAPPER = new ObjectMapper();

    @FunctionName("IoTHubTrigger")
    public void run(
            @EventHubTrigger(name = "message", eventHubName = "events",
                    connection = "IotHubConnectionString",
                    consumerGroup = "myfuncapp-cg",
                    cardinality = Cardinality.ONE) String message,
            @BindingName("SystemProperties") Map<String, Object> properties,
            final ExecutionContext context) {
        TelemetryData data;
        try {
            data = MAPPER.readValue(message, TelemetryData.class);
        } catch (Exception e) {
            context.getLogger().severe("Failed to parse JSON message: " + e.getMessage());
            context.getLogger().severe("Raw Message: " + message);
            return;
        }
        try {
            context.getLogger().info("Java Event Hub trigger processed a message: " + message);

            String deviceId = properties.get("iothub-connection-device-id").toString();
            String eventTimeIso = properties.get("iothub-enqueuedtime").toString();
            Instant enqueuedInstant = Instant.parse(eventTimeIso);
            long eventTimeMillis = enqueuedInstant.toEpochMilli();
            Timestamp dbTimestamp = new Timestamp(eventTimeMillis);
            data.ts = dbTimestamp;

            context.getLogger().info("Data received from Device: " + deviceId);

            griddb = new GridDB();
            String containerName = "telemetryData";
            griddb.CreateContainer(containerName);
            griddb.WriteToContainer(containerName, data);
            context.getLogger().info("Successfully saved to DB.");
        } catch (Throwable t) {
            context.getLogger().severe("CRITICAL: Function execution failed with exception:");
            context.getLogger().severe(t.toString());
            // throw new RuntimeException("GridDB processing failed", t);
        }
    }
}
```

The Java code itself is vanilla; it's what the Azure Functions bindings do that is the real magic. As explained above, the IoT Hub connection string directs which events are polled, so their values can be grabbed and eventually written to GridDB.

Data Aggregation

So now we've got thousands of rows of data from our sensors inside of our DB.
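One detail worth isolating from the handler above is the timestamp handling: IoT Hub delivers the enqueued time as an ISO-8601 string in the system properties, and the function converts it into epoch milliseconds for the row's `Timestamp`. A self-contained sketch of just that conversion (class name is mine, not the project's):

```java
import java.sql.Timestamp;
import java.time.Instant;

public class EnqueuedTimeDemo {

    // Convert an ISO-8601 enqueued-time string (as delivered in the
    // IoT Hub system properties) into the java.sql.Timestamp that the
    // handler stores on the telemetry row.
    static Timestamp toDbTimestamp(String eventTimeIso) {
        Instant enqueued = Instant.parse(eventTimeIso);
        return new Timestamp(enqueued.toEpochMilli());
    }

    public static void main(String[] args) {
        Timestamp ts = toDbTimestamp("2024-01-01T00:00:00Z");
        System.out.println(ts.getTime()); // prints 1704067200000
    }
}
```

Using the hub-side enqueued time rather than "now" on the function side means the stored timestamp reflects when the device's message actually arrived at the hub, independent of any processing delay.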
A typical workflow in this scenario might include a separate service which runs aggregations on a timer, either to help manage the data or to keep around an easy-to-reference snapshot of the sensors' data. Python is a popular vehicle for running data-science-y operations, so let's set up the GridDB Python client and run a simple average function every hour.

Python Client

While using Java in an Azure Function works out of the box (as shown above), the Python client has some requirements for being installed and run. Specifically, we need Java installed, as well as some specially built Java jar files. The easiest way to get this sort of environment set up in an Azure Function is to use Docker. With Docker, we can include all of the libraries and instructions needed to install the Python client, and deploy the container with all source code as is. The Python script will then run on a timer every hour and write to a new GridDB Cloud table which keeps track of the hourly aggregates of each data point.

Dockerize Python Client

To dockerize our Python client, we need to convert the instructions for installing the Python client into Docker instructions, as well as copy in the source code and credentials.
Here is what the Dockerfile looks like:

```dockerfile
FROM mcr.microsoft.com/azure-functions/python:4-python3.12

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
ENV PYTHONUNBUFFERED=1

ENV GRIDDB_NOTIFICATION_PROVIDER="[notification_provider]"
ENV GRIDDB_CLUSTER_NAME="[clustername]"
ENV GRIDDB_USERNAME="[griddb-user]"
ENV GRIDDB_PASSWORD="[password]"
ENV GRIDDB_DATABASE="[database]"

WORKDIR /home/site/wwwroot

RUN apt-get update && \
    apt-get install -y default-jdk git maven && \
    rm -rf /var/lib/apt/lists/*

ENV JAVA_HOME=/usr/lib/jvm/default-java

WORKDIR /tmp
RUN git clone https://github.com/griddb/python_client.git && \
    cd python_client/java && \
    mvn install

RUN mkdir -p /home/site/wwwroot/lib && \
    mv /tmp/python_client/java/target/gridstore-arrow-5.8.0.jar /home/site/wwwroot/lib/gridstore-arrow.jar

WORKDIR /tmp/python_client/python
RUN python3.12 -m pip install .

WORKDIR /home/site/wwwroot
COPY ./lib/gridstore.jar /home/site/wwwroot/lib/
COPY ./lib/arrow-memory-netty.jar /home/site/wwwroot/lib/
COPY ./lib/gridstore-jdbc.jar /home/site/wwwroot/lib/
COPY *.py .
COPY requirements.txt .
RUN python3.12 -m pip install -r requirements.txt

ENV CLASSPATH=/home/site/wwwroot/lib/gridstore.jar:/home/site/wwwroot/lib/gridstore-jdbc.jar:/home/site/wwwroot/lib/gridstore-arrow.jar:/home/site/wwwroot/lib/arrow-memory-netty.jar
```

Once this is in place, you do the normal docker build, docker tag, docker push. But there is one caveat!

Azure Container Registry

Though not necessary, setting up your own Azure Container Registry (ACR) — think Dockerhub — to host your images makes life a whole lot simpler when deploying your code to Azure Functions. So in my case, I set up an ACR and pushed my built images into that repository. Once there, I went to the Deployment Center of my new Python Azure Function and selected my container's name, etc. From there, it will deploy and run based on your stipulations. Cool!
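To make "build, tag, push" concrete, the sequence against an ACR looks roughly like this (the registry and image names are placeholders, not the ones used in this project):

```shell
# Log in to your Azure Container Registry (name is a placeholder)
az acr login --name myregistry

# Build the image from the Dockerfile above, tagged for the registry
docker build -t myregistry.azurecr.io/griddb-aggregator:v1 .

# Push it so the Azure Function's Deployment Center can pull it
docker push myregistry.azurecr.io/griddb-aggregator:v1
```

Tagging the image with the full registry hostname (`myregistry.azurecr.io/...`) is what tells docker push which registry to send it to.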
Python Code to do Data Aggregation

Similar to the Java implementation above, we will use the Azure Function bindings/triggers in the Python code, with a cron-style schedule for the timer. Under the hood, the Azure Function infrastructure will run the function every hour based on our setting. The code itself is also vanilla: we query the data written above from the past hour, find the averages, and then write that data back to GridDB Cloud in another table. Note that since this function relies solely on the Azure Function timer and GridDB, there is no IoT Hub-style connection string to grab.

Here is the main Python that Azure will run when the time is right:

```python
import logging
import azure.functions as func
import griddb_python as griddb
from griddb_connector import GridDB
from griddb_sql import GridDBJdbc
from datetime import datetime
import pyarrow as pa
import pandas as pd
import sys

app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 * * * *", arg_name="myTimer",
                   run_on_startup=True, use_monitor=False)
def aggregations(myTimer: func.TimerRequest) -> None:
    if myTimer.past_due:
        logging.info('The timer is past due!')

    logging.info('Python timer trigger function executed.')

    nosql = None
    store = None
    ra = None
    griddb_jdbc = None
    try:
        print("Attempting to connect to GridDB...")
        nosql = GridDB()
        store = nosql.get_store()
        ra = griddb.RootAllocator(sys.maxsize)
        if not store:
            print("Connection failed. Exiting script.")
            sys.exit(1)

        griddb_jdbc = GridDBJdbc()
        if griddb_jdbc.conn:
            averages = griddb_jdbc.calculate_avg()
            nosql.pushAvg(averages)

        print("\nScript finished successfully.")
    except Exception as e:
        print(f"A critical error occurred in main: {e}")
    finally:
        print("Script execution complete.")
```

The rest of the code isn't very interesting, but let's take a brief look.
Here we are querying the last hour of data and calculating the averages:

```python
    def calculate_avg(self):
        try:
            curs = self.conn.cursor()
            queryStr = ('SELECT temperature, pressure, humidity FROM telemetryData '
                        'WHERE ts BETWEEN TIMESTAMP_ADD(HOUR, NOW(), -1) AND NOW();')
            curs.execute(queryStr)
            if curs.description is None:
                print("Query returned no results or failed.")
                return None

            column_names = [desc[0] for desc in curs.description]
            all_rows = curs.fetchall()
            if not all_rows:
                print("No data found for the query range.")
                return None

            # Pivot the row-oriented result set into per-column lists
            results = {name.lower(): [] for name in column_names}
            for row in all_rows:
                for i, name in enumerate(column_names):
                    results[name.lower()].append(row[i])

            averages = {
                'temperature': statistics.mean(results['temperature']),
                'humidity': statistics.mean(results['humidity']),
                'pressure': statistics.mean(results['pressure'])
            }
            return averages
        except Exception as e:
            print(f"Failed to calculate averages: {e}")
            return None
```

Note: for this function, we created the destination table beforehand (not in the Python code).

Bonus: Azure Kafka Event Hub

We also set up an Event Hub function to query the last 5 minutes of telemetry data and stream it through Kafka. We ended up leaving this dangling, but I've included it here because the source code already exists. It also uses a timer trigger and relies solely on the connection to GridDB Cloud. Azure's Event Hub handles all of the complicated Kafka stuff under the hood; we just needed to return the data to be pushed through Kafka.

Here is the source code:

```java
package net.griddb;

import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.EventHubOutput;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.TimerTrigger;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;

public class GridDBPublisher {

    @FunctionName("GridDBPublisher")
    @EventHubOutput(name = "outputEvent", eventHubName = "griddb-telemetry",
```
