{"id":52082,"date":"2025-04-02T00:00:00","date_gmt":"2025-04-02T07:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/"},"modified":"2025-04-02T00:00:00","modified_gmt":"2025-04-02T07:00:00","slug":"leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos","status":"publish","type":"post","link":"https:\/\/www.griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/","title":{"rendered":"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos"},"content":{"rendered":"<h2><strong>Introduction<\/strong><\/h2>\n<p>This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u00e2\u0080\u0099ll explore implementing this using a tech stack that includes Node.js for backend operations, GridDB for managing video metadata, OpenAI for AI-driven text and voice generation, and React for building an interactive frontend.<\/p>\n<h2>Run The Application<\/h2>\n<p>Clone the repository from this <a href=\"https:\/\/github.com\/junwatu\/ai-narrative-and-voices\">link<\/a> or run the following commands:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">git clone https:\/\/github.com\/junwatu\/ai-narrative-and-voices.git\ncd ai-narrative-and-voices\ncd app\nnpm install<\/code><\/pre>\n<\/div>\n<p>Copy the <code>.env.example<\/code> file to <code>.env<\/code> and set the <code>VITE_APP_URL<\/code> environment variable or leave it by default and set the <code>OPENAI_API_KEY<\/code> environment variable (please look at this section for more details on how to <a href=\"#openai-key\">get the OpenAI API key<\/a>).<\/p>\n<p>To run the application, execute the following command:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">npm run start:build<\/code><\/pre>\n<\/div>\n<p>Open the browser and navigate to 
<code>http:\/\/localhost:3000\/<\/code>.<\/p>\n<blockquote>\n<p>You can customize the app address and port by setting the <code>VITE_SITE_URL<\/code> environment variable in the <code>.env<\/code> file.<\/p>\n<\/blockquote>\n<h2><strong>Solving the Problem<\/strong><\/h2>\n<p>Creating compelling narratives and attention-grabbing titles for documentary videos presents significant challenges due to:<\/p>\n<ul>\n<li><strong>Time-Consuming Process<\/strong>: Manually crafting narratives and titles is lengthy and often leads to delays, particularly under tight production schedules.<\/li>\n<li><strong>Creative Blocks<\/strong>: Writers frequently face creative blocks, hindering the consistent generation of fresh, engaging content.<\/li>\n<li><strong>Scalability Issues<\/strong>: Maintaining consistent quality across multiple projects becomes increasingly difficult as content volume grows.<\/li>\n<\/ul>\n<h2><strong>Tech Stack Overview<\/strong><\/h2>\n<h3>OpenAI Key<\/h3>\n<p>To access any OpenAI services, we need a valid key. 
Go to this <a href=\"https:\/\/platform.openai.com\/api-keys\">link<\/a> and create a new OpenAI key.<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/openai-key.png\"><img fetchpriority=\"high\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/openai-key.png\" alt=\"\" width=\"2966\" height=\"1470\" class=\"aligncenter size-full wp-image-31425\" srcset=\"\/wp-content\/uploads\/2025\/04\/openai-key.png 2966w, \/wp-content\/uploads\/2025\/04\/openai-key-300x149.png 300w, \/wp-content\/uploads\/2025\/04\/openai-key-1024x508.png 1024w, \/wp-content\/uploads\/2025\/04\/openai-key-768x381.png 768w, \/wp-content\/uploads\/2025\/04\/openai-key-1536x761.png 1536w, \/wp-content\/uploads\/2025\/04\/openai-key-2048x1015.png 2048w, \/wp-content\/uploads\/2025\/04\/openai-key-600x297.png 600w\" sizes=\"(max-width: 2966px) 100vw, 2966px\" \/><\/a><\/p>\n<p>The OpenAI key is on a project basis, so we need to create a project first in the OpenAI platform and you need also to enable any models that you use on a project. 
For this project, we will need <code>gpt-4o<\/code> or <code>gpt-4o-2024-08-06<\/code>, <code>gpt-4o-mini<\/code> and <code>tts-1<\/code> models.<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/openai-enabled-models.png\"><img decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/openai-enabled-models.png\" alt=\"\" width=\"2980\" height=\"1652\" class=\"aligncenter size-full wp-image-31424\" srcset=\"\/wp-content\/uploads\/2025\/04\/openai-enabled-models.png 2980w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-300x166.png 300w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-1024x568.png 1024w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-768x426.png 768w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-1536x852.png 1536w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-2048x1135.png 2048w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-1170x650.png 1170w, \/wp-content\/uploads\/2025\/04\/openai-enabled-models-600x333.png 600w\" sizes=\"(max-width: 2980px) 100vw, 2980px\" \/><\/a><\/p>\n<p>The OpenAI key will be saved on the <code>.env<\/code> file and make sure not to include it in version control by adding it to the <code>.gitignore<\/code>.<\/p>\n<h3>Node.js<\/h3>\n<p>This project will run on the Node.js platform. You need to install it from <a href=\"https:\/\/nodejs.org\/en\/download\">here<\/a>. 
For this project, we will use the <code>nvm<\/code> package manager and Node.js v16.20.2<br \/>\nLTS version.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\"># installs nvm (Node Version Manager)\ncurl -o- https:\/\/raw.githubusercontent.com\/nvm-sh\/nvm\/v0.39.7\/install.sh | bash\n\n# download and install Node.js\nnvm install 16\n\n# verifies the right Node.js version is in the environment\nnode -v # should print `v16.20.2`\n\n# verifies the right NPM version is in the environment\nnpm -v # should print `8.19.4``<\/code><\/pre>\n<\/div>\n<p>To connect Node.js and GridDB database, you need the <a href=\"https:\/\/github.com\/nodejs\/node-addon-api\">gridb-node-api<\/a> npm package which is a Node.js binding developed using GridDB C Client and Node addon API.<\/p>\n<h3>FFmpeg<\/h3>\n<p>This project utilizes the <a href=\"https:\/\/www.npmjs.com\/package\/fluent-ffmpeg\"><code>fluent-ffmpeg<\/code><\/a> npm package, which requires FFmpeg to be installed on the system. For Ubuntu, you can use the following command to install it:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">sudo apt update\nsudo apt install ffmpeg<\/code><\/pre>\n<\/div>\n<p>For more installation information, please go to the <a href=\"https:\/\/ffmpeg.org\/\">FFmpeg official website<\/a>.<\/p>\n<h3>GridDB<\/h3>\n<p>To save the video summary and video data, we will use the GridDB database. Please look at the <a href=\"https:\/\/docs.griddb.net\/latest\/gettingstarted\/using-apt\/#install-with-apt-get\">guide<\/a> for detailed installation. We will use Ubuntu 20.04 LTS here.<\/p>\n<p>Run GridDB and check if the service is running. 
Use this command:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">sudo systemctl status gridstore<\/code><\/pre>\n<\/div>\n<p>If not running try to run the database with this command:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">sudo systemctl start gridstore<\/code><\/pre>\n<\/div>\n<h3>React<\/h3>\n<p>We will use <a href=\"https:\/\/react.dev\/\">React<\/a> to build the front end of the application. React lets you build user interfaces out of individual pieces called components. So if you want to expand or modify the application, you can easily do so by adding or modifying components.<\/p>\n<h2><strong>System Architecture<\/strong><\/h2>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/system-arch.png\"><img decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/system-arch.png\" alt=\"\" width=\"861\" height=\"480\" class=\"aligncenter size-full wp-image-31428\" srcset=\"\/wp-content\/uploads\/2025\/04\/system-arch.png 861w, \/wp-content\/uploads\/2025\/04\/system-arch-300x167.png 300w, \/wp-content\/uploads\/2025\/04\/system-arch-768x428.png 768w, \/wp-content\/uploads\/2025\/04\/system-arch-150x85.png 150w, \/wp-content\/uploads\/2025\/04\/system-arch-600x334.png 600w\" sizes=\"(max-width: 861px) 100vw, 861px\" \/><\/a><\/p>\n<ol>\n<li><strong>Video Upload:<\/strong> The browser uploads the video to the Node.js backend for processing.<\/li>\n<li><strong>Video Processing:<\/strong> Node.js sends the video to FFmpeg for processing tasks like encoding, decoding, or frame extraction.<\/li>\n<li><strong>Processed Video Retrieval:<\/strong> FFmpeg processes the video and returns the processed data to Node.js.<\/li>\n<li><strong>AI Content Generation:<\/strong> Node.js sends the processed video data to OpenAI for generating narrative voices and titles.<\/li>\n<li><strong>Metadata Storage:<\/strong> Node.js stores the video metadata and AI-generated content in 
GridDB.<\/li>\n<li><strong>Frontend Interaction:<\/strong> Node.js sends the necessary data to the React frontend for user interaction and display.<\/li>\n<\/ol>\n<h2><strong>Node.js Server<\/strong><\/h2>\n<p>Node.js server is the core of the application. It is responsible for the following tasks:<\/p>\n<ul>\n<li><a href=\"#video-upload\">Handle the video upload<\/a><\/li>\n<li><a href=\"#frame-extraction\">Frame extraction<\/a><\/li>\n<li><a href=\"#ai-content-generation\">AI content generation<\/a><\/li>\n<li><a href=\"#audio-voice-generation\">Audio Voice Generation<\/a><\/li>\n<li><a href=\"#storing-video-metadata-in-griddb\">Storing Data To GridDB<\/a><\/li>\n<li><a href=\"#routes\">Routes<\/a><\/li>\n<\/ul>\n<p>The server code is:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">import express from 'express'\nimport path from 'path'\nimport bodyParser from 'body-parser'\nimport metadataRoutes from '.\/routes\/metadataRoutes.js'\nimport uploadRoutes from '.\/routes\/uploadRoutes.js'\nimport { __dirname } from '.\/dirname.js'\n\nconst app = express()\n\nif (!process.env.VITE_APP_URL) {\n    throw new Error('VITE_APP_URL environment variable is not set')\n}\nconst apiURL = new URL(process.env.VITE_APP_URL)\nconst HOSTNAME = apiURL.hostname || 'localhost'\nconst PORT = apiURL.port || 3000\n\napp.use(bodyParser.json({ limit: '10mb' }))\napp.use(express.static(path.join(__dirname, 'dist')))\napp.use(express.static(path.join(__dirname, 'public')))\napp.use(express.static(path.join(__dirname, 'audio')))\napp.use(express.static(path.join(__dirname, 'uploads')))\n\napp.get('\/', (req, res) => {\n    res.sendFile(path.join(__dirname, 'dist', 'index.html'))\n})\n\napp.use('\/api', uploadRoutes)\napp.use('\/api\/metadata', metadataRoutes)\n\napp.listen(PORT, HOSTNAME, () => {\n    console.log(`Server is running on http:\/\/${HOSTNAME}:${PORT}`)\n})<\/code><\/pre>\n<\/div>\n<p>The node.js server provides routes and exposes dist, public, audio, and uploads 
directories to the client. The audio and upload directories are needed so later the client will be able to download the generated audio and original video files.<\/p>\n<h3>Video Upload<\/h3>\n<p>The <code>api\/upload<\/code> route handles the video upload and saves the video in the <code>uploads<\/code> folder.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">app.use('\/api', uploadRoutes)<\/code><\/pre>\n<\/div>\n<p>The <code>uploadRoutes<\/code> is defined in the <code>routes\/uploadRoutes.js<\/code> file.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">router.post('\/upload', upload.single('file'), async (req, res) => {\n    if (!req.file) {\n        return res.status(400).send('No file uploaded or invalid file type.')\n    }\n\n    try {\n        \/\/ relative path\n        const videoPath = path.join('uploads', req.file.filename)\n        const { base64Frames, duration } = await processVideo(videoPath)\n        \/\/ send frames to OpenAI\n        const { narrative, title, voice } = await generateNarrative(base64Frames, duration)\n\n        await saveDocumentaryMetadata({\n            video: videoPath, audio: voice, narrative, title\n        })\n\n        res.json({\n            filename: req.file.filename,\n            narrative,\n            title,\n            voice\n        })\n    } catch (error) {\n        console.error('Error processing video:', error)\n        res.status(500).send('Error processing video')\n    }\n})<\/code><\/pre>\n<\/div>\n<p>This route is used to process the video and extract the frames and will return the base64 frames of the video and later will be sent to OpenAI for generating the narrative voices and titles. This route returns JSON data for client-side display.<\/p>\n<h3>Frame Extraction<\/h3>\n<p>The <code>processVideo<\/code> function is defined in the <code>libs\/videoprocessor.js<\/code> file. 
This function uses the <code>ffmpeg<\/code> package to extract the frames from the video.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">export function extractFrames(videoPath, secondsPerFrame, outputFolder, scaleFactor = 0.5) {\n    return new Promise((resolve, reject) => {\n        const frameRate = 1 \/ secondsPerFrame\n        const framePattern = path.join(outputFolder, 'frame-%03d.png')\n        const resizeOptions = `fps=${frameRate},scale=iw*${scaleFactor}:ih*${scaleFactor}`\n\n        ffmpeg(videoPath)\n            .outputOptions([`-vf ${resizeOptions}`])\n            .output(framePattern)\n            .on('end', () => {\n                fs.readdir(outputFolder, (err, files) => {\n                    if (err) {\n                        reject(err)\n                    } else {\n                        const framePaths = files.map(file => path.join(outputFolder, file))\n                        resolve(framePaths)\n                    }\n                })\n            })\n            .on('error', reject)\n            .run()\n    })\n}<\/code><\/pre>\n<\/div>\n<p>The default seconds per frame is 4 seconds. You can override this by passing the <code>secondsPerFrame<\/code> parameter to the <code>extractFrames<\/code> function. 
The frames will be saved in the <code>frames<\/code> folder.<\/p>\n<h3>AI Content Generation<\/h3>\n<p>The <code>generateNarrative<\/code> is the function responsible for AI-generated titles, narratives, and audio files.<\/p>\n<h4>Generate Narrative<\/h4>\n<p>The <code>generateNarrative<\/code> function takes the base64 frames of the video as input and returns the narrative, title, and generated audio voice.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">async function generateNarrative(frames) {\n    const videoDuration = 2\n\n    const frameObjects = frames.map(x => ({\n        type: 'image_url',\n        image_url: {\n            url: `data:image\/png;base64,${x}`,\n            detail: \"low\"\n        }\n    }));\n\n    const videoContent = {\n        role: \"user\",\n        content: [{\n                type: 'text',\n                text: `The original video, in which frames are generated  is ${videoDuration} seconds. Create a story based on these frames that fit for exactly ${videoDuration} seconds. BE CREATIVE. 
DIRECT ANSWER ONLY.`\n            },\n            ...frameObjects\n        ],\n    }\n\n    const response = await openai.chat.completions.create({\n        model: \"gpt-4o\",\n        messages: [{\n                role: \"system\",\n                content: \"You are a professional storyteller.\"\n            },\n            videoContent\n        ],\n        temperature: 1,\n        max_tokens: 4095,\n        top_p: 1,\n        frequency_penalty: 0,\n        presence_penalty: 0,\n        response_format: {\n            \"type\": \"text\"\n        },\n    });\n\n    if (response.choices[0].finish_reason === 'stop') {\n        const narrative = response.choices[0].message.content\n        const title = await generateTitle(narrative)\n\n        const fileName = title.split(' ').join('-').toLowerCase()\n        const voice = await generateSpeechToFile(narrative, 'audio', fileName)\n\n        return {\n            narrative,\n            title,\n            voice\n        }\n    } else {\n        throw new Error('Failed to generate narrative')\n    }\n}<\/code><\/pre>\n<\/div>\n<p>To generate the narrative text, we use prompt engineering to guide the AI model. The prompt is a text that includes the video frames and the video duration:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-sh\">The original video, in which frames are generated  is ${videoDuration} seconds. Create a story based on these frames that fit for exactly ${videoDuration} seconds. BE CREATIVE. 
DIRECT ANSWER ONLY.<\/code><\/pre>\n<\/div>\n<p>This function also uses the <code>generateTitle<\/code> function to generate the title and the <code>generateSpeechToFile<\/code> function to generate audio voice.<\/p>\n<h4>Generate Title<\/h4>\n<p>The <code>generateTitle<\/code> function takes the narrative text as input and returns the title.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">async function generateTitle(narrative) {\n    const titleResponse = await openai.chat.completions.create({\n        model: 'gpt-4o-mini',\n        messages: [{\n                role: 'system',\n                content: 'You are a professional storyteller.'\n            },\n            {\n                role: 'user',\n                content: `Direct answer only. Generate a title for the following narrative text: n${narrative}`\n            }\n        ],\n        temperature: 1,\n        max_tokens: 1000,\n        top_p: 1,\n        frequency_penalty: 0,\n        presence_penalty: 0,\n        response_format: {\n            type: 'text'\n        }\n    })\n\n    const title = titleResponse.choices[0].message.content\n    return title\n}<\/code><\/pre>\n<\/div>\n<p>The model used here is <code>gpt-4o-mini<\/code> which is a smaller version of the <code>gpt-4o<\/code> model and it&#8217;s very good to generate a unique title.<\/p>\n<h3>Audio Voice Generation<\/h3>\n<p>The <code>generateSpeechToFile<\/code> function will generate an audio voice based on the given text input. We use the <code>tts-1<\/code> AI model, which is a powerful text-to-speech model from OpenAI. The generated audio style can be selected from a few produced sound styles. 
In this project, we will use a <code>shimmer<\/code> voice style.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">async function generateSpeechToFile(text, folderPath, fileName, model = 'tts-1', voice = 'shimmer') {\n    try {\n        if (!fs.existsSync(folderPath)) {\n            await fs.promises.mkdir(folderPath, { recursive: true });\n        }\n        \n        const mp3Filename = `${fileName}.mp3`\n        const outputFilePath = path.join(folderPath, mp3Filename);\n        const mp3 = await openai.audio.speech.create({\n            model,\n            voice,\n            input: text,\n        });\n\n        const buffer = Buffer.from(await mp3.arrayBuffer());\n        await fs.promises.writeFile(outputFilePath, buffer);\n        console.log(`File saved at: ${outputFilePath}`);\n        return mp3Filename;\n    } catch (error) {\n        console.error('Error generating speech:', error);\n        throw error;\n    }\n}<\/code><\/pre>\n<\/div>\n<p>The generated audio will be saved as an MP3 file in the specified folder. This audio file can be combined with the original video footage to create a compelling documentary-style video.<\/p>\n<h3>Connect To GridDB<\/h3>\n<p>The <code>griddb.cjs<\/code> file is responsible to connect to the GridDB database and the <code>gridDBService.js<\/code> is a wrapper for easy code. 
These are the methods we use in this project.<\/p>\n<table>\n<thead>\n<tr>\n<th>Function Name<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td><code>saveDocumentaryMetadata<\/code><\/td>\n<td>Saves documentary metadata (video, audio, narrative, title) to the database.<\/td>\n<\/tr>\n<tr>\n<td><code>getDocumentaryMetadata<\/code><\/td>\n<td>Retrieves documentary metadata by its ID.<\/td>\n<\/tr>\n<tr>\n<td><code>getAllDocumentaryMetadata<\/code><\/td>\n<td>Retrieves all documentary metadata stored in the database.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h3>Storing Video Metadata in GridDB<\/h3>\n<p>The GridDB database will store <strong>the video file path<\/strong>, <strong>audio voice filename<\/strong>, <strong>generated narrative<\/strong>, and <strong>title<\/strong>. This ensures efficient retrieval and management of all essential documentary metadata.<\/p>\n<p>After uploading and processing the video by OpenAI. The metadata will be saved into the database using the <code>saveDocumentaryMetadata<\/code> function.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">await saveDocumentaryMetadata({ video: videoPath, audio: voice, narrative, title })<\/code><\/pre>\n<\/div>\n<p>This function is also accessible directly in the <code>\/api\/metadata<\/code> route using the <code>POST HTTP<\/code> method. Other metadata routes are accessible directly using the <code>\/api\/metadata<\/code> route. 
Please look at the <a href=\"#routes\">routes section<\/a> for route details.<\/p>\n<h3>Get Videos Metadata<\/h3>\n<p>To get the video metadata, you can use the <code>GET<\/code> method  in the <code>\/api\/metadata<\/code> to retrieve all saved data and use the <code>\/api\/metadata\/:docId<\/code> to get the specific video metadata.<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/ss-ai-narrative.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/ss-ai-narrative.png\" alt=\"\" width=\"1630\" height=\"738\" class=\"aligncenter size-full wp-image-31427\" srcset=\"\/wp-content\/uploads\/2025\/04\/ss-ai-narrative.png 1630w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-300x136.png 300w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-1024x464.png 1024w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-768x348.png 768w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-1536x695.png 1536w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-600x272.png 600w\" sizes=\"(max-width: 1630px) 100vw, 1630px\" \/><\/a><\/p>\n<h3>Get Video By ID<\/h3>\n<p>To get video metadata based on the ID, you can use the <code>GET<\/code> method in the <code>\/api\/metadata\/:id<\/code> route with <code>id<\/code> as the data identifier as the query parameter.<\/p>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid.png\" alt=\"\" width=\"1630\" height=\"738\" class=\"aligncenter size-full wp-image-31426\" srcset=\"\/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid.png 1630w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid-300x136.png 300w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid-1024x464.png 1024w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid-768x348.png 768w, 
\/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid-1536x695.png 1536w, \/wp-content\/uploads\/2025\/04\/ss-ai-narrative-get-data-byid-600x272.png 600w\" sizes=\"(max-width: 1630px) 100vw, 1630px\" \/><\/a><\/p>\n<h3>Routes<\/h3>\n<p>Here are the routes list for the Node.js server in this project:<\/p>\n<table>\n<thead>\n<tr>\n<th>HTTP Method<\/th>\n<th>Route<\/th>\n<th>Description<\/th>\n<th>File<\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<td>GET<\/td>\n<td><code>\/api\/metadata<\/code><\/td>\n<td>Fetch all documentary metadata<\/td>\n<td><code>metadataRoutes.js<\/code><\/td>\n<\/tr>\n<tr>\n<td>GET<\/td>\n<td><code>\/api\/metadata\/:docId<\/code><\/td>\n<td>Fetch metadata for a specific documentary<\/td>\n<td><code>metadataRoutes.js<\/code><\/td>\n<\/tr>\n<tr>\n<td>POST<\/td>\n<td><code>\/api\/metadata<\/code><\/td>\n<td>Save or update documentary metadata<\/td>\n<td><code>metadataRoutes.js<\/code><\/td>\n<\/tr>\n<tr>\n<td>POST<\/td>\n<td><code>\/api\/upload<\/code><\/td>\n<td>Upload and process a video file (MP4 format only)<\/td>\n<td><code>uploadRoutes.js<\/code><\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<h2>User Interface<\/h2>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/ui-screenshot.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/ui-screenshot.png\" alt=\"\" width=\"2110\" height=\"872\" class=\"aligncenter size-full wp-image-31429\" srcset=\"\/wp-content\/uploads\/2025\/04\/ui-screenshot.png 2110w, \/wp-content\/uploads\/2025\/04\/ui-screenshot-300x124.png 300w, \/wp-content\/uploads\/2025\/04\/ui-screenshot-1024x423.png 1024w, \/wp-content\/uploads\/2025\/04\/ui-screenshot-768x317.png 768w, \/wp-content\/uploads\/2025\/04\/ui-screenshot-1536x635.png 1536w, \/wp-content\/uploads\/2025\/04\/ui-screenshot-2048x846.png 2048w, \/wp-content\/uploads\/2025\/04\/ui-screenshot-600x248.png 600w\" sizes=\"(max-width: 2110px) 100vw, 2110px\" \/><\/a><\/p>\n<p>The user interface is 
built with React.js, providing a modern, component-based architecture. This choice of technology stack enables developers to easily customize and expand the user interface to meet evolving project requirements or incorporate new features in the future.<\/p>\n<p>The main UI is a simple file uploader react component. The component source code is in the <code>components\/FileUpload.jsx<\/code> file.<br \/>\nThe <code>handleUpload<\/code> function will upload the file to the <code>\/api\/upload<\/code> route and will handle the data response for further processing.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-js\">jsx\nconst handleUpload = async () => {\n    if (!file) {\n        setUploadStatus('Please select a file first.')\n        return\n    }\n\n    const formData = new FormData()\n    formData.append('file', file)\n\n    try {\n        setUploadStatus('Uploading...')\n        const response = await fetch('\/api\/upload', {\n            method: 'POST',\n            body: formData,\n        })\n\n        if (!response.ok) {\n            throw new Error('Network response was not ok')\n        }\n\n        const data = await response.json()\n        setUploadData(data)\n        setUploadStatus('Upload successful!')\n    } catch (error) {\n        console.error('Error uploading file:', error)\n        setUploadStatus('Error uploading file. 
Please try again.')\n    }\n}<\/code><\/pre>\n<\/div>\n<h2>Demo<\/h2>\n<p><a href=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/demo.gif\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/griddb.net\/wp-content\/uploads\/2025\/04\/demo.gif\" alt=\"\" width=\"1708\" height=\"1080\" class=\"aligncenter size-full wp-image-31423\" \/><\/a><\/p>\n<p>Besides the detailed response data, the user can also download the generated narrative audio and the original video files.<\/p>\n<h2>Further Enhancements<\/h2>\n<p>Here are some recommended enhancements to make this base project more complete and usable:<\/p>\n<ul>\n<li>Add a video composer that merges the generated narrative audio with the original video.<\/li>\n<li>Support uploading longer videos.<\/li>\n<li>Add a video player interface to show the final result.<\/li>\n<\/ul>\n<h2>Code Repository Link<\/h2>\n<p><a href=\"https:\/\/github.com\/griddbnet\/Blogs.git\">GitHub<\/a><\/p>\n<p>Use branch <code>narrate-ai<\/code><\/p>\n<p><code>$ git clone https:\/\/github.com\/griddbnet\/Blogs.git --branch narrate-ai<\/code><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u2019ll explore implementing this using a tech stack that includes Node.js for backend operations, GridDB for managing video metadata, OpenAI for AI-driven text and voice generation, and React for building an interactive frontend. 
Run The Application Clone the repository [&hellip;]<\/p>\n","protected":false},"author":41,"featured_media":52083,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-52082","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u00e2\u0080\u0099ll explore implementing this using a\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. 
We\u00e2\u0080\u0099ll explore implementing this using a\" \/>\n<meta property=\"og:url\" content=\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2025-04-02T07:00:00+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.griddb.net\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1792\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"griddb-admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"griddb-admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"11 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\"},\"author\":{\"name\":\"griddb-admin\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233\"},\"headline\":\"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos\",\"datePublished\":\"2025-04-02T07:00:00+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\"},\"wordCount\":1375,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\",\"url\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\",\"name\":\"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos | GridDB: Open Source Time Series Database for 
IoT\",\"isPartOf\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg\",\"datePublished\":\"2025-04-02T07:00:00+00:00\",\"description\":\"Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u00e2\u0080\u0099ll explore implementing this using a\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg\",\"contentUrl\":\"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg\",\"width\":1792,\"height\":1024},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.griddb.net\/en\/#website\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL\",\"publisher\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.griddb.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.griddb.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/www.griddb.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},\"image\":{\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233\",\"name\":\"griddb-admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.griddb.net\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g\",\"caption\":\"griddb-admin\"},\"url\":\"https:\/\/www.griddb.net\/en\/author\/griddb-admin\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos | GridDB: Open Source Time Series Database for IoT","description":"Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u00e2\u0080\u0099ll explore implementing this using a","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/","og_locale":"en_US","og_type":"article","og_title":"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos | GridDB: Open Source Time Series Database for IoT","og_description":"Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u00e2\u0080\u0099ll explore implementing this using a","og_url":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/","og_site_name":"GridDB: Open Source Time Series Database for IoT","article_publisher":"https:\/\/www.facebook.com\/griddbcommunity\/","article_published_time":"2025-04-02T07:00:00+00:00","og_image":[{"width":1792,"height":1024,"url":"https:\/\/www.griddb.net\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg","type":"image\/jpeg"}],"author":"griddb-admin","twitter_card":"summary_large_image","twitter_creator":"@GridDBCommunity","twitter_site":"@GridDBCommunity","twitter_misc":{"Written by":"griddb-admin","Est. 
reading time":"11 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#article","isPartOf":{"@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/"},"author":{"name":"griddb-admin","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233"},"headline":"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos","datePublished":"2025-04-02T07:00:00+00:00","mainEntityOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/"},"wordCount":1375,"commentCount":0,"publisher":{"@id":"https:\/\/www.griddb.net\/en\/#organization"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg","articleSection":["Blog"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/","url":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/","name":"Leveraging AI to Generate Narrative Voices and Titles for Documentary Videos | GridDB: Open Source Time Series Database for 
IoT","isPartOf":{"@id":"https:\/\/www.griddb.net\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg","datePublished":"2025-04-02T07:00:00+00:00","description":"Introduction This blog focuses on leveraging AI to generate narrative voices and titles for documentary videos. We\u00e2\u0080\u0099ll explore implementing this using a","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb.net\/en\/blog\/leveraging-ai-to-generate-narrative-voices-and-titles-for-documentary-videos\/#primaryimage","url":"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg","contentUrl":"\/wp-content\/uploads\/2025\/12\/ai-narrative-cover.jpg","width":1792,"height":1024},{"@type":"WebSite","@id":"https:\/\/www.griddb.net\/en\/#website","url":"https:\/\/www.griddb.net\/en\/","name":"GridDB: Open Source Time Series Database for IoT","description":"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL","publisher":{"@id":"https:\/\/www.griddb.net\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.griddb.net\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.griddb.net\/en\/#organization","name":"Fixstars","url":"https:\/\/www.griddb.net\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/","url":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","contentUrl":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","width":200,"height":83,"caption":"Fixstars"},"image":{"@id":"https:\/\/www.griddb.net\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/griddbcommunity\/","https:\/\/x.com\/GridDBCommunity","https:\/\/www.linkedin.com\/company\/griddb-by-toshiba"]},{"@type":"Person","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233","name":"griddb-admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.griddb.net\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g","caption":"griddb-admin"},"url":"https:\/\/www.griddb.net\/en\/author\/griddb-admin\/"}]}},"_links":{"self":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/52082","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/w
p-json\/wp\/v2\/users\/41"}],"replies":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/comments?post=52082"}],"version-history":[{"count":0,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/posts\/52082\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media\/52083"}],"wp:attachment":[{"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/media?parent=52082"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/categories?post=52082"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.griddb.net\/en\/wp-json\/wp\/v2\/tags?post=52082"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}