{"id":46701,"date":"2022-04-28T00:00:00","date_gmt":"2022-04-28T07:00:00","guid":{"rendered":"https:\/\/griddb-linux-hte8hndjf8cka8ht.westus-01.azurewebsites.net\/blog\/topic-modeling-with-lda-using-python-and-griddb\/"},"modified":"2025-11-13T12:56:01","modified_gmt":"2025-11-13T20:56:01","slug":"topic-modeling-with-lda-using-python-and-griddb","status":"publish","type":"post","link":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/","title":{"rendered":"Topic Modeling with LDA Using Python and GridDB"},"content":{"rendered":"<p>In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. Because text data is unlabeled, it is an unsupervised technique. In a world filled with data, it is increasingly important to categorize documents by topic. For example, if a company receives hundreds of reviews, it needs to know which categories of reviews are the most important and which are the least.<\/p>\n<p>Topics can serve as keywords describing a document: when we think of a topic related to economics, we think of the stock market, USD, inflation, GDP, etc. Topic models can automatically detect such topics based on the words appearing in a document. The problem we will tackle here is topic modeling.<\/p>\n<p><code>LDA - (Latent Dirichlet Allocation)<\/code><\/p>\n<p>The word latent means hidden, something that has yet to be discovered. &#8220;Dirichlet&#8221; indicates that the distribution of topics in documents, and of words in topics, is assumed to follow a Dirichlet distribution. &#8220;Allocation&#8221; here refers to the process of giving something, in this case, topics.<\/p>\n<p>In this tutorial, we\u2019ll use the reviews in the following dataset to generate topics. 
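<\/p>\n<p>To make the Dirichlet assumption above concrete, here is a small illustrative sketch that is not part of the tutorial pipeline: drawing document-topic proportions from a Dirichlet prior with NumPy. The number of topics (5) and the concentration parameter (0.1) are arbitrary values chosen for the illustration.<\/p>

```python
import numpy as np

rng = np.random.default_rng(0)
# Each draw is a probability vector over 5 topics; a small alpha
# concentrates the mass on a few topics, i.e. "sparse" documents.
doc_topic = rng.dirichlet(alpha=[0.1] * 5, size=3)
print(doc_topic.round(2))
print(doc_topic.sum(axis=1))  # each row sums to 1
```

<p>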
In this way, we can learn what users are talking about, what they focus on, and perhaps where app developers should improve.<\/p>\n<p>The outline of the tutorial is as follows:<\/p>\n<ol>\n<li>Prerequisites and Environment setup<\/li>\n<li>Dataset overview<\/li>\n<li>Importing required libraries<\/li>\n<li>Loading the dataset<\/li>\n<li>Data Cleaning and Preprocessing<\/li>\n<li>Building and Training a Machine Learning Model<\/li>\n<li>Conclusion<\/li>\n<\/ol>\n<h2>1&#46; Prerequisites and Environment setup<\/h2>\n<p>This tutorial is carried out in Anaconda Navigator (Python version \u2013 3.8.3) on the Windows operating system. The following packages need to be installed before you continue with the tutorial \u2013<\/p>\n<ol>\n<li>\n<p>Pandas<\/p>\n<\/li>\n<li>\n<p>NumPy<\/p>\n<\/li>\n<li>\n<p>Sklearn<\/p>\n<\/li>\n<li>\n<p>nltk<\/p>\n<\/li>\n<li>\n<p>re (part of the Python standard library, so no installation is needed)<\/p>\n<\/li>\n<li>\n<p>griddb_python<\/p>\n<\/li>\n<li>\n<p>spacy<\/p>\n<\/li>\n<li>\n<p>gensim<\/p>\n<\/li>\n<\/ol>\n<p>You can install these packages in Conda\u2019s virtual environment using <code>conda install package-name<\/code>. In case you are using Python directly via the terminal\/command prompt, <code>pip install package-name<\/code> will do the job.<\/p>\n<h3>GridDB installation<\/h3>\n<p>While loading the dataset, this tutorial will cover two methods \u2013 using GridDB and using pandas. 
To access GridDB using Python, the following packages also need to be installed beforehand:<\/p>\n<ol>\n<li><a href=\"https:\/\/github.com\/griddb\/c_client\">GridDB C-client<\/a><\/li>\n<li>SWIG (Simplified Wrapper and Interface Generator)<\/li>\n<li><a href=\"https:\/\/github.com\/griddb\/python_client\">GridDB Python Client<\/a><\/li>\n<\/ol>\n<h2>2&#46; Dataset Overview<\/h2>\n<p>Google Play Store Apps Dataset: web-scraped data of 10,000 Play Store apps for analyzing the Android market.<\/p>\n<p>It can be downloaded from here (<code>https:\/\/www.kaggle.com\/datasets\/lava18\/google-play-store-apps\/version\/5<\/code>).<\/p>\n<h2>3&#46; Importing Required Libraries<\/h2>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import griddb_python as griddb\nimport numpy as np\nimport pandas as pd\nimport re, nltk, spacy, gensim\nfrom sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD\nfrom sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer\nfrom sklearn.model_selection import GridSearchCV\nfrom pprint import pprint<\/code><\/pre>\n<\/div>\n<h2>4&#46; Loading the Dataset<\/h2>\n<p>Let\u2019s proceed and load the dataset into our notebook.<\/p>\n<h3>4&#46;a Using GridDB<\/h3>\n<p>Toshiba GridDB\u2122 is a highly scalable NoSQL database best suited for IoT and Big Data. GridDB is built around offering a versatile data store that is optimized for IoT, provides high scalability, is tuned for high performance, and ensures high reliability.<\/p>\n<p>For storing large amounts of data, a CSV file can be cumbersome. GridDB, an open-source, highly scalable, in-memory NoSQL database, serves as a perfect alternative. 
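<\/p>\n<p>The loading code below relies on pandas\u2019 <code>read_sql_query()<\/code>, which works with any DB-API-style connection object. As a quick, self-contained illustration of that pattern, here is a sketch that uses Python\u2019s built-in <code>sqlite3<\/code> as a stand-in for the real GridDB connection (the table and column names here are made up for the example):<\/p>

```python
import sqlite3

import pandas as pd

# An in-memory SQLite database stands in for the GridDB connection here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (app TEXT, review TEXT)")
conn.execute("INSERT INTO reviews VALUES ('DemoApp', 'Works great')")
conn.commit()

# The same call shape is used later with the GridDB connection.
df = pd.read_sql_query("SELECT * FROM reviews", conn)
print(df.shape)  # (1, 2)
```

<p>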
If you are new to GridDB, a tutorial on <a href=\"https:\/\/griddb.net\/en\/blog\/using-pandas-dataframes-with-griddb\/\">reading and writing to GridDB<\/a> can be useful.<\/p>\n<p>Assuming that you have already set up your database, we will now write the SQL query in Python to load our dataset.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">sql_statement = ('SELECT * FROM googleplaystore_user_reviews')\ndataset = pd.read_sql_query(sql_statement, cont)<\/code><\/pre>\n<\/div>\n<p>Note that the <code>cont<\/code> variable holds the information about the container where our data is stored. Replace <code>googleplaystore_user_reviews<\/code> with the name of your container. More info can be found in this tutorial on <a href=\"https:\/\/griddb.net\/en\/blog\/using-pandas-dataframes-with-griddb\/\">reading and writing to GridDB<\/a>.<\/p>\n<p>When it comes to IoT and Big Data use cases, GridDB clearly stands out among other databases in the relational and NoSQL space. Overall, GridDB offers multiple reliability features for mission-critical applications that require high availability and data retention.<\/p>\n<h3>4&#46;b Using pandas read_csv<\/h3>\n<p>Alternatively, pandas can read the CSV file directly: the <code>read_csv()<\/code> function parses the file and returns a DataFrame. 
Both methods lead to the same result, as either way the data is loaded into a pandas DataFrame.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">df = pd.read_csv(\"googleplaystore_user_reviews.csv\")\ndf = df.dropna(subset=[\"Translated_Review\"])<\/code><\/pre>\n<\/div>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">df.head()<\/code><\/pre>\n<\/div>\n<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }\n    .dataframe tbody tr th {\n        vertical-align: top;\n    }\n    .dataframe thead th {\n        text-align: right;\n    }\n  <\/style>\n<table border=\"1\" class=\"dataframe\">\n<thead>\n<tr style=\"text-align: right;\">\n<th>\n        <\/th>\n<th>\n          App\n        <\/th>\n<th>\n          Translated_Review\n        <\/th>\n<th>\n          Sentiment\n        <\/th>\n<th>\n          Sentiment_Polarity\n        <\/th>\n<th>\n          Sentiment_Subjectivity\n        <\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<th>\n          0\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          I like eat delicious food. 
That&#8217;s I&#8217;m cooking &#8230;\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          1.00\n        <\/td>\n<td>\n          0.533333\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          1\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          This help eating healthy exercise regular basis\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          0.25\n        <\/td>\n<td>\n          0.288462\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          3\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          Works great especially going grocery store\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          0.40\n        <\/td>\n<td>\n          0.875000\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          4\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          Best idea us\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          1.00\n        <\/td>\n<td>\n          0.300000\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          5\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          Best way\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          1.00\n        <\/td>\n<td>\n          0.300000\n        <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>Once the dataset is loaded, let us now explore the dataset. 
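<\/p>\n<p>Beyond a glance at the first rows, it can help to check the overall size and the sentiment balance. The sketch below is self-contained: it uses a tiny toy frame standing in for our <code>df<\/code>, so the exact numbers differ from the real dataset.<\/p>

```python
import pandas as pd

# Toy frame standing in for the real df loaded from the CSV.
df = pd.DataFrame({
    "App": ["10 Best Foods for You"] * 2 + ["Other App"],
    "Translated_Review": ["Best way", "Works great", "Crashes often"],
    "Sentiment": ["Positive", "Positive", "Negative"],
})

print(df.shape)                        # (rows, columns)
print(df["Sentiment"].value_counts())  # how many reviews per sentiment
```

<p>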
We&#8217;ll print the first 5 rows of this dataset using the head() function.<\/p>\n<h2>5&#46; Data Cleaning and Preprocessing<\/h2>\n<p>We clean the data by removing emails, new-line characters, and single quotes.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Convert to list\ndata = df.Translated_Review.values.tolist()\n# Remove emails\ndata = [re.sub(r'\\S*@\\S*\\s?', '', sent) for sent in data]\n# Remove new line characters\ndata = [re.sub(r'\\s+', ' ', sent) for sent in data]\n# Remove distracting single quotes\ndata = [re.sub(r\"'\", \"\", sent) for sent in data]\npprint(data[:1])<\/code><\/pre>\n<\/div>\n<pre><code>['I like eat delicious food. Thats Im cooking food myself, case \"10 Best '\n 'Foods\" helps lot, also \"Best Before (Shelf Life)\"']\n<\/code><\/pre>\n<p>We now need to tokenize each sentence into a list of words, eliminating all punctuation and unnecessary characters. We will also lemmatize the words: unlike stemming, which merely strips prefixes and suffixes from a word stem, lemmatization reduces each word to its dictionary root, known as the lemma. 
The advantage of this is that the total number of unique words in the dictionary is reduced.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">def sent_to_words(sentences):\n    for sentence in sentences:\n        yield gensim.utils.simple_preprocess(str(sentence), deacc=True)  # deacc=True removes punctuation\n\ndef lemmatization(texts, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV']):\n    texts_out = []\n    for sent in texts:\n        doc = nlp(\" \".join(sent))\n        texts_out.append(\" \".join([token.lemma_ if token.lemma_ not in ['-PRON-'] else '' for token in doc if token.pos_ in allowed_postags]))\n    return texts_out\n\ndata_words = list(sent_to_words(data))\nprint(data_words[:1])\n\nnlp = spacy.load(\"en_core_web_sm\", disable=[\"parser\", \"ner\"])\ndata_lemmatized = lemmatization(data_words, allowed_postags=[\"NOUN\", \"VERB\"])  # keep only nouns and verbs\nprint(data_lemmatized[:2])<\/code><\/pre>\n<\/div>\n<pre><code>[['like', 'eat', 'delicious', 'food', 'thats', 'im', 'cooking', 'food', 'myself', 'case', 'best', 'foods', 'helps', 'lot', 'also', 'best', 'before', 'shelf', 'life']]\n['eat food s m cook food case food help lot shelf life', 'help eat exercise basis']\n<\/code><\/pre>\n<p>As input, the LDA algorithm requires a document-word matrix, which we create using CountVectorizer.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">vectorizer = CountVectorizer(analyzer='word',\n                             min_df=10,                        # keep terms appearing in at least 10 documents\n                             stop_words='english',             # remove English stop words\n                             lowercase=True,\n                             token_pattern='[a-zA-Z0-9]{3,}')  # tokens of at least 3 characters\ndata_vectorized = vectorizer.fit_transform(data_lemmatized)<\/code><\/pre>\n<\/div>\n<h2>6&#46; Building and Training a Machine Learning Model<\/h2>\n<p>We have everything we need to build a Latent Dirichlet Allocation (LDA) model. 
To construct the LDA model, let&#8217;s initialize one and then call fit_transform().<\/p>\n<p>Based on my prior knowledge of the dataset, I have set the number of topics (n_components) to 20 in this example. This number will be adjusted using grid search later on.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Build the LDA model\nlda_model = LatentDirichletAllocation(n_components=20,\n                                      max_iter=10,\n                                      learning_method='online',\n                                      random_state=100,\n                                      batch_size=128,\n                                      evaluate_every=-1,\n                                      n_jobs=-1)\nlda_output = lda_model.fit_transform(data_vectorized)\nprint(lda_model)  # Model attributes<\/code><\/pre>\n<\/div>\n<pre><code>LatentDirichletAllocation(learning_method='online', n_components=20, n_jobs=-1,\n                          random_state=100)\n<\/code><\/pre>\n<h3>Diagnose model performance with perplexity and log-likelihood<\/h3>\n<p>A model with a high log-likelihood and a low perplexity (perplexity = exp(-1. * log-likelihood per word)) is considered good.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Log likelihood: higher is better\nprint(\"Log Likelihood: \", lda_model.score(data_vectorized))\n# Perplexity: Lower the better. Perplexity = exp(-1. 
* log-likelihood per word)\nprint(\"Perplexity: \", lda_model.perplexity(data_vectorized))\n# See model parameters\npprint(lda_model.get_params())<\/code><\/pre>\n<\/div>\n<pre><code>Log Likelihood:  -2127623.32986425\nPerplexity:  1065.3272644698702\n{'batch_size': 128,\n 'doc_topic_prior': None,\n 'evaluate_every': -1,\n 'learning_decay': 0.7,\n 'learning_method': 'online',\n 'learning_offset': 10.0,\n 'max_doc_update_iter': 100,\n 'max_iter': 10,\n 'mean_change_tol': 0.001,\n 'n_components': 20,\n 'n_jobs': -1,\n 'perp_tol': 0.1,\n 'random_state': 100,\n 'topic_word_prior': None,\n 'total_samples': 1000000.0,\n 'verbose': 0}\n<\/code><\/pre>\n<h3>Use grid search to determine the best LDA model<\/h3>\n<p>n_components (the number of topics) is the most important tuning parameter for LDA models. I will also search over learning_decay, which controls the learning rate. Beyond these, learning_offset (which downweights early iterations and should be > 1) and max_iter can also be considered as search parameters. 
This process can consume a lot of time and resources.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Define the search parameters\nsearch_params = {'n_components': [10, 20], 'learning_decay': [0.5, 0.9]}\n# Initialize the model\nlda = LatentDirichletAllocation(max_iter=5, learning_method='online', learning_offset=50., random_state=0)\n# Initialize the grid search\nmodel = GridSearchCV(lda, param_grid=search_params)\n# Run the grid search\nmodel.fit(data_vectorized)<\/code><\/pre>\n<\/div>\n<pre><code>GridSearchCV(error_score='raise',\n             estimator=LatentDirichletAllocation(learning_method=None,\n                                                 n_jobs=1),\n             n_jobs=1,\n             param_grid={'learning_decay': [0.5, 0.9],\n                         'n_components': [10, 20]},\n             return_train_score='warn')\n<\/code><\/pre>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Best model\nbest_lda_model = model.best_estimator_\n# Model parameters\nprint(\"Best Model's Params: \", model.best_params_)\n# Log likelihood score\nprint(\"Best Log Likelihood Score: \", model.best_score_)\n# Perplexity\nprint(\"Model Perplexity: \", best_lda_model.perplexity(data_vectorized))<\/code><\/pre>\n<\/div>\n<pre><code>Best Model's Params:  {'learning_decay': 0.9, 'n_components': 10}\nBest Log 
Likelihood Score:  -432616.36669435585\nModel Perplexity:  764.0439579711182\n<\/code><\/pre>\n<p>A logical way to determine whether a document belongs to a particular topic is to see which topic contributed most to it and assign the document to that topic. In the table below, all major topic contributions are highlighted, and the dominant topic gets its own column.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Create the Document-Topic matrix\nlda_output = best_lda_model.transform(data_vectorized)\n\ntopicnames = [\"Topic\" + str(i) for i in range(best_lda_model.n_components)]\ndocnames = [\"Doc\" + str(i) for i in range(len(data))]\n\n# Make the pandas dataframe\ndf_document_topic = pd.DataFrame(np.round(lda_output, 2), columns=topicnames, index=docnames)\n# Get the dominant topic for each document\ndominant_topic = np.argmax(df_document_topic.values, axis=1)\ndf_document_topic[\"dominant_topic\"] = dominant_topic\n# Styling\ndef color_green(val):\n    color = \"green\" if val > .1 else \"black\"\n    return \"color: {col}\".format(col=color)\n\ndef make_bold(val):\n    weight = 700 if val > .1 else 400\n    return \"font-weight: {weight}\".format(weight=weight)\n\n# Apply the style\ndf_document_topics = df_document_topic.head(15).style.applymap(color_green).applymap(make_bold)\ndf_document_topics<\/code><\/pre>\n<\/div>\n<style type=\"text\/css\">\n#T_fb43c_row0_col0, #T_fb43c_row0_col1, #T_fb43c_row0_col2, #T_fb43c_row0_col4, #T_fb43c_row0_col5, #T_fb43c_row0_col6, #T_fb43c_row0_col7, #T_fb43c_row0_col8, #T_fb43c_row1_col0, #T_fb43c_row1_col1, #T_fb43c_row1_col2, #T_fb43c_row1_col4, #T_fb43c_row1_col5, #T_fb43c_row1_col6, #T_fb43c_row1_col7, #T_fb43c_row1_col8, #T_fb43c_row1_col9, #T_fb43c_row2_col0, #T_fb43c_row2_col1, #T_fb43c_row2_col2, #T_fb43c_row2_col3, #T_fb43c_row2_col4, #T_fb43c_row2_col5, #T_fb43c_row2_col7, #T_fb43c_row2_col8, #T_fb43c_row2_col9, #T_fb43c_row3_col1, #T_fb43c_row3_col2, #T_fb43c_row3_col3, #T_fb43c_row3_col4, #T_fb43c_row3_col5, #T_fb43c_row3_col6, #T_fb43c_row3_col7, #T_fb43c_row3_col8, #T_fb43c_row3_col9, #T_fb43c_row3_col10, #T_fb43c_row4_col0, #T_fb43c_row4_col1, #T_fb43c_row4_col2, #T_fb43c_row4_col4, #T_fb43c_row4_col5, #T_fb43c_row4_col6, #T_fb43c_row4_col7, #T_fb43c_row4_col8, #T_fb43c_row4_col9, #T_fb43c_row5_col0, #T_fb43c_row5_col1, #T_fb43c_row5_col2, #T_fb43c_row5_col3, #T_fb43c_row5_col4, #T_fb43c_row5_col5, #T_fb43c_row5_col6, #T_fb43c_row5_col7, #T_fb43c_row5_col8, #T_fb43c_row5_col9, #T_fb43c_row5_col10, #T_fb43c_row6_col0, #T_fb43c_row6_col1, #T_fb43c_row6_col3, #T_fb43c_row6_col4, #T_fb43c_row6_col5, #T_fb43c_row6_col6, #T_fb43c_row6_col7, #T_fb43c_row6_col8, #T_fb43c_row6_col9, #T_fb43c_row7_col0, #T_fb43c_row7_col1, #T_fb43c_row7_col2, #T_fb43c_row7_col3, #T_fb43c_row7_col4, #T_fb43c_row7_col5, #T_fb43c_row7_col7, #T_fb43c_row7_col8, #T_fb43c_row8_col0, #T_fb43c_row8_col1, #T_fb43c_row8_col2, #T_fb43c_row8_col3, #T_fb43c_row8_col4, #T_fb43c_row8_col5, #T_fb43c_row8_col6, #T_fb43c_row8_col7, #T_fb43c_row8_col8, #T_fb43c_row8_col9, #T_fb43c_row8_col10, #T_fb43c_row9_col0, #T_fb43c_row9_col1, #T_fb43c_row9_col2, #T_fb43c_row9_col3, #T_fb43c_row9_col6, #T_fb43c_row9_col7, #T_fb43c_row9_col8, #T_fb43c_row9_col9, #T_fb43c_row10_col1, #T_fb43c_row10_col2, #T_fb43c_row10_col3, #T_fb43c_row10_col4, #T_fb43c_row10_col5, #T_fb43c_row10_col6, #T_fb43c_row10_col7, #T_fb43c_row10_col8, #T_fb43c_row10_col9, #T_fb43c_row10_col10, #T_fb43c_row11_col0, #T_fb43c_row11_col1, #T_fb43c_row11_col4, #T_fb43c_row11_col5, #T_fb43c_row11_col6, #T_fb43c_row11_col7, #T_fb43c_row11_col8, #T_fb43c_row11_col9, #T_fb43c_row12_col0, #T_fb43c_row12_col1, #T_fb43c_row12_col2, #T_fb43c_row12_col4, #T_fb43c_row12_col6, #T_fb43c_row12_col7, #T_fb43c_row12_col8, #T_fb43c_row12_col9, #T_fb43c_row13_col0, #T_fb43c_row13_col1, #T_fb43c_row13_col2, #T_fb43c_row13_col4, #T_fb43c_row13_col5, #T_fb43c_row13_col6, #T_fb43c_row13_col7, #T_fb43c_row13_col8, #T_fb43c_row14_col1, #T_fb43c_row14_col2, #T_fb43c_row14_col3, #T_fb43c_row14_col4, #T_fb43c_row14_col5, #T_fb43c_row14_col6, #T_fb43c_row14_col7, #T_fb43c_row14_col8, #T_fb43c_row14_col9, #T_fb43c_row14_col10 {color: black; font-weight: 400; }\n#T_fb43c_row0_col3, #T_fb43c_row0_col9, #T_fb43c_row0_col10, #T_fb43c_row1_col3, #T_fb43c_row1_col10, #T_fb43c_row2_col6, #T_fb43c_row2_col10, #T_fb43c_row3_col0, #T_fb43c_row4_col3, #T_fb43c_row4_col10, #T_fb43c_row6_col2, #T_fb43c_row6_col10, #T_fb43c_row7_col6, #T_fb43c_row7_col9, #T_fb43c_row7_col10, #T_fb43c_row9_col4, #T_fb43c_row9_col5, #T_fb43c_row9_col10, #T_fb43c_row10_col0, #T_fb43c_row11_col2, #T_fb43c_row11_col3, #T_fb43c_row11_col10, #T_fb43c_row12_col3, #T_fb43c_row12_col5, #T_fb43c_row12_col10, #T_fb43c_row13_col3, #T_fb43c_row13_col9, #T_fb43c_row13_col10, #T_fb43c_row14_col0 {color: green; font-weight: 700; }\n<\/style>\n<table id=\"T_fb43c\">\n<thead>\n<tr>\n<th class=\"blank level0\">\n        \u00a0\n      <\/th>\n<th id=\"T_fb43c_level0_col0\" class=\"col_heading level0 col0\">\n        Topic0\n      <\/th>\n<th id=\"T_fb43c_level0_col1\" class=\"col_heading level0 col1\">\n        Topic1\n      <\/th>\n<th id=\"T_fb43c_level0_col2\" class=\"col_heading level0 col2\">\n        Topic2\n      <\/th>\n<th id=\"T_fb43c_level0_col3\" class=\"col_heading level0 col3\">\n        Topic3\n      <\/th>\n<th id=\"T_fb43c_level0_col4\" class=\"col_heading level0 col4\">\n        Topic4\n      <\/th>\n<th id=\"T_fb43c_level0_col5\" class=\"col_heading level0 col5\">\n        Topic5\n      <\/th>\n<th id=\"T_fb43c_level0_col6\" class=\"col_heading level0 col6\">\n        Topic6\n      <\/th>\n<th id=\"T_fb43c_level0_col7\" class=\"col_heading level0 col7\">\n        Topic7\n      <\/th>\n<th id=\"T_fb43c_level0_col8\" class=\"col_heading level0 col8\">\n        Topic8\n      <\/th>\n<th id=\"T_fb43c_level0_col9\" class=\"col_heading level0 col9\">\n        Topic9\n      <\/th>\n<th id=\"T_fb43c_level0_col10\" class=\"col_heading level0 
col10\">\n        dominant_topic\n      <\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<th id=\"T_fb43c_level0_row0\" class=\"row_heading level0 row0\">\n        Doc0\n      <\/th>\n<td id=\"T_fb43c_row0_col0\" class=\"data row0 col0\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col1\" class=\"data row0 col1\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col2\" class=\"data row0 col2\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col3\" class=\"data row0 col3\">\n        0.760000\n      <\/td>\n<td id=\"T_fb43c_row0_col4\" class=\"data row0 col4\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col5\" class=\"data row0 col5\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col6\" class=\"data row0 col6\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col7\" class=\"data row0 col7\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col8\" class=\"data row0 col8\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row0_col9\" class=\"data row0 col9\">\n        0.160000\n      <\/td>\n<td id=\"T_fb43c_row0_col10\" class=\"data row0 col10\">\n        3\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row1\" class=\"row_heading level0 row1\">\n        Doc1\n      <\/th>\n<td id=\"T_fb43c_row1_col0\" class=\"data row1 col0\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col1\" class=\"data row1 col1\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col2\" class=\"data row1 col2\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col3\" class=\"data row1 col3\">\n        0.820000\n      <\/td>\n<td id=\"T_fb43c_row1_col4\" class=\"data row1 col4\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col5\" class=\"data row1 col5\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col6\" class=\"data row1 col6\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col7\" class=\"data row1 col7\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col8\" class=\"data row1 
col8\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col9\" class=\"data row1 col9\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row1_col10\" class=\"data row1 col10\">\n        3\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row2\" class=\"row_heading level0 row2\">\n        Doc2\n      <\/th>\n<td id=\"T_fb43c_row2_col0\" class=\"data row2 col0\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col1\" class=\"data row2 col1\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col2\" class=\"data row2 col2\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col3\" class=\"data row2 col3\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col4\" class=\"data row2 col4\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col5\" class=\"data row2 col5\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col6\" class=\"data row2 col6\">\n        0.770000\n      <\/td>\n<td id=\"T_fb43c_row2_col7\" class=\"data row2 col7\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col8\" class=\"data row2 col8\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col9\" class=\"data row2 col9\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row2_col10\" class=\"data row2 col10\">\n        6\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row3\" class=\"row_heading level0 row3\">\n        Doc3\n      <\/th>\n<td id=\"T_fb43c_row3_col0\" class=\"data row3 col0\">\n        0.550000\n      <\/td>\n<td id=\"T_fb43c_row3_col1\" class=\"data row3 col1\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col2\" class=\"data row3 col2\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col3\" class=\"data row3 col3\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col4\" class=\"data row3 col4\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col5\" class=\"data row3 col5\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col6\" class=\"data row3 col6\">\n        0.050000\n     
 <\/td>\n<td id=\"T_fb43c_row3_col7\" class=\"data row3 col7\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col8\" class=\"data row3 col8\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col9\" class=\"data row3 col9\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row3_col10\" class=\"data row3 col10\">\n        0\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row4\" class=\"row_heading level0 row4\">\n        Doc4\n      <\/th>\n<td id=\"T_fb43c_row4_col0\" class=\"data row4 col0\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col1\" class=\"data row4 col1\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col2\" class=\"data row4 col2\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col3\" class=\"data row4 col3\">\n        0.550000\n      <\/td>\n<td id=\"T_fb43c_row4_col4\" class=\"data row4 col4\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col5\" class=\"data row4 col5\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col6\" class=\"data row4 col6\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col7\" class=\"data row4 col7\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col8\" class=\"data row4 col8\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col9\" class=\"data row4 col9\">\n        0.050000\n      <\/td>\n<td id=\"T_fb43c_row4_col10\" class=\"data row4 col10\">\n        3\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row5\" class=\"row_heading level0 row5\">\n        Doc5\n      <\/th>\n<td id=\"T_fb43c_row5_col0\" class=\"data row5 col0\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col1\" class=\"data row5 col1\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col2\" class=\"data row5 col2\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col3\" class=\"data row5 col3\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col4\" class=\"data row5 col4\">\n        0.100000\n      <\/td>\n<td 
id=\"T_fb43c_row5_col5\" class=\"data row5 col5\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col6\" class=\"data row5 col6\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col7\" class=\"data row5 col7\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col8\" class=\"data row5 col8\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col9\" class=\"data row5 col9\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row5_col10\" class=\"data row5 col10\">\n        0\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row6\" class=\"row_heading level0 row6\">\n        Doc6\n      <\/th>\n<td id=\"T_fb43c_row6_col0\" class=\"data row6 col0\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col1\" class=\"data row6 col1\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col2\" class=\"data row6 col2\">\n        0.700000\n      <\/td>\n<td id=\"T_fb43c_row6_col3\" class=\"data row6 col3\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col4\" class=\"data row6 col4\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col5\" class=\"data row6 col5\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col6\" class=\"data row6 col6\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col7\" class=\"data row6 col7\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col8\" class=\"data row6 col8\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col9\" class=\"data row6 col9\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row6_col10\" class=\"data row6 col10\">\n        2\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row7\" class=\"row_heading level0 row7\">\n        Doc7\n      <\/th>\n<td id=\"T_fb43c_row7_col0\" class=\"data row7 col0\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col1\" class=\"data row7 col1\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col2\" class=\"data row7 col2\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col3\" 
class=\"data row7 col3\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col4\" class=\"data row7 col4\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col5\" class=\"data row7 col5\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col6\" class=\"data row7 col6\">\n        0.250000\n      <\/td>\n<td id=\"T_fb43c_row7_col7\" class=\"data row7 col7\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col8\" class=\"data row7 col8\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row7_col9\" class=\"data row7 col9\">\n        0.550000\n      <\/td>\n<td id=\"T_fb43c_row7_col10\" class=\"data row7 col10\">\n        9\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row8\" class=\"row_heading level0 row8\">\n        Doc8\n      <\/th>\n<td id=\"T_fb43c_row8_col0\" class=\"data row8 col0\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col1\" class=\"data row8 col1\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col2\" class=\"data row8 col2\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col3\" class=\"data row8 col3\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col4\" class=\"data row8 col4\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col5\" class=\"data row8 col5\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col6\" class=\"data row8 col6\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col7\" class=\"data row8 col7\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col8\" class=\"data row8 col8\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col9\" class=\"data row8 col9\">\n        0.100000\n      <\/td>\n<td id=\"T_fb43c_row8_col10\" class=\"data row8 col10\">\n        0\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row9\" class=\"row_heading level0 row9\">\n        Doc9\n      <\/th>\n<td id=\"T_fb43c_row9_col0\" class=\"data row9 col0\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col1\" class=\"data row9 col1\">\n     
   0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col2\" class=\"data row9 col2\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col3\" class=\"data row9 col3\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col4\" class=\"data row9 col4\">\n        0.790000\n      <\/td>\n<td id=\"T_fb43c_row9_col5\" class=\"data row9 col5\">\n        0.120000\n      <\/td>\n<td id=\"T_fb43c_row9_col6\" class=\"data row9 col6\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col7\" class=\"data row9 col7\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col8\" class=\"data row9 col8\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col9\" class=\"data row9 col9\">\n        0.010000\n      <\/td>\n<td id=\"T_fb43c_row9_col10\" class=\"data row9 col10\">\n        4\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row10\" class=\"row_heading level0 row10\">\n        Doc10\n      <\/th>\n<td id=\"T_fb43c_row10_col0\" class=\"data row10 col0\">\n        0.850000\n      <\/td>\n<td id=\"T_fb43c_row10_col1\" class=\"data row10 col1\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col2\" class=\"data row10 col2\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col3\" class=\"data row10 col3\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col4\" class=\"data row10 col4\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col5\" class=\"data row10 col5\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col6\" class=\"data row10 col6\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col7\" class=\"data row10 col7\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col8\" class=\"data row10 col8\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col9\" class=\"data row10 col9\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row10_col10\" class=\"data row10 col10\">\n        0\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row11\" class=\"row_heading level0 row11\">\n      
  Doc11\n      <\/th>\n<td id=\"T_fb43c_row11_col0\" class=\"data row11 col0\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col1\" class=\"data row11 col1\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col2\" class=\"data row11 col2\">\n        0.220000\n      <\/td>\n<td id=\"T_fb43c_row11_col3\" class=\"data row11 col3\">\n        0.620000\n      <\/td>\n<td id=\"T_fb43c_row11_col4\" class=\"data row11 col4\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col5\" class=\"data row11 col5\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col6\" class=\"data row11 col6\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col7\" class=\"data row11 col7\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col8\" class=\"data row11 col8\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col9\" class=\"data row11 col9\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row11_col10\" class=\"data row11 col10\">\n        3\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row12\" class=\"row_heading level0 row12\">\n        Doc12\n      <\/th>\n<td id=\"T_fb43c_row12_col0\" class=\"data row12 col0\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col1\" class=\"data row12 col1\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col2\" class=\"data row12 col2\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col3\" class=\"data row12 col3\">\n        0.520000\n      <\/td>\n<td id=\"T_fb43c_row12_col4\" class=\"data row12 col4\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col5\" class=\"data row12 col5\">\n        0.270000\n      <\/td>\n<td id=\"T_fb43c_row12_col6\" class=\"data row12 col6\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col7\" class=\"data row12 col7\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col8\" class=\"data row12 col8\">\n        0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col9\" class=\"data row12 col9\">\n        
0.030000\n      <\/td>\n<td id=\"T_fb43c_row12_col10\" class=\"data row12 col10\">\n        3\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row13\" class=\"row_heading level0 row13\">\n        Doc13\n      <\/th>\n<td id=\"T_fb43c_row13_col0\" class=\"data row13 col0\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col1\" class=\"data row13 col1\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col2\" class=\"data row13 col2\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col3\" class=\"data row13 col3\">\n        0.380000\n      <\/td>\n<td id=\"T_fb43c_row13_col4\" class=\"data row13 col4\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col5\" class=\"data row13 col5\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col6\" class=\"data row13 col6\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col7\" class=\"data row13 col7\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col8\" class=\"data row13 col8\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row13_col9\" class=\"data row13 col9\">\n        0.460000\n      <\/td>\n<td id=\"T_fb43c_row13_col10\" class=\"data row13 col10\">\n        9\n      <\/td>\n<\/tr>\n<tr>\n<th id=\"T_fb43c_level0_row14\" class=\"row_heading level0 row14\">\n        Doc14\n      <\/th>\n<td id=\"T_fb43c_row14_col0\" class=\"data row14 col0\">\n        0.850000\n      <\/td>\n<td id=\"T_fb43c_row14_col1\" class=\"data row14 col1\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col2\" class=\"data row14 col2\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col3\" class=\"data row14 col3\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col4\" class=\"data row14 col4\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col5\" class=\"data row14 col5\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col6\" class=\"data row14 col6\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col7\" class=\"data row14 
col7\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col8\" class=\"data row14 col8\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col9\" class=\"data row14 col9\">\n        0.020000\n      <\/td>\n<td id=\"T_fb43c_row14_col10\" class=\"data row14 col10\">\n        0\n      <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Topic-Keyword Matrix\ndf_topic_keywords = pd.DataFrame(best_lda_model.components_)\n# Assign Column and Index\ndf_topic_keywords.columns = vectorizer.get_feature_names_out()\ndf_topic_keywords.index = topicnames\n# View\ndf_topic_keywords.head()<\/code><\/pre>\n<\/div>\n<div style=\"    overflow-x: scroll;white-space: nowrap;\">\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }<\/p>\n<p>    .dataframe tbody tr th {\n        vertical-align: top;\n    }<\/p>\n<p>    .dataframe thead th {\n        text-align: right;\n    }\n  <\/style>\n<table border=\"1\" class=\"dataframe\">\n<thead>\n<tr style=\"text-align: right;\">\n<th>\n        <\/th>\n<th>\n          aap\n        <\/th>\n<th>\n          abandon\n        <\/th>\n<th>\n          ability\n        <\/th>\n<th>\n          abuse\n        <\/th>\n<th>\n          accept\n        <\/th>\n<th>\n          access\n        <\/th>\n<th>\n          accessory\n        <\/th>\n<th>\n          accident\n        <\/th>\n<th>\n          accommodation\n        <\/th>\n<th>\n          accomplish\n        <\/th>\n<th>\n          &#8230;\n        <\/th>\n<th>\n          yardage\n        <\/th>\n<th>\n          yay\n        <\/th>\n<th>\n          year\n        <\/th>\n<th>\n          yesterday\n        <\/th>\n<th>\n          yoga\n        <\/th>\n<th>\n          youtube\n        <\/th>\n<th>\n          zip\n        <\/th>\n<th>\n          zombie\n        <\/th>\n<th>\n          zone\n        <\/th>\n<th>\n          zoom\n        <\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<th>\n          
Topic0\n        <\/th>\n<td>\n          0.102649\n        <\/td>\n<td>\n          0.102871\n        <\/td>\n<td>\n          56.001281\n        <\/td>\n<td>\n          0.103583\n        <\/td>\n<td>\n          0.107420\n        <\/td>\n<td>\n          0.132561\n        <\/td>\n<td>\n          12.712732\n        <\/td>\n<td>\n          0.102863\n        <\/td>\n<td>\n          0.102585\n        <\/td>\n<td>\n          0.102685\n        <\/td>\n<td>\n          &#8230;\n        <\/td>\n<td>\n          8.642076\n        <\/td>\n<td>\n          0.102612\n        <\/td>\n<td>\n          153.232551\n        <\/td>\n<td>\n          0.102522\n        <\/td>\n<td>\n          0.496217\n        <\/td>\n<td>\n          0.106992\n        <\/td>\n<td>\n          0.211912\n        <\/td>\n<td>\n          0.140018\n        <\/td>\n<td>\n          0.177780\n        <\/td>\n<td>\n          0.104975\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic1\n        <\/th>\n<td>\n          0.101828\n        <\/td>\n<td>\n          0.102233\n        <\/td>\n<td>\n          1.148602\n        <\/td>\n<td>\n          0.102127\n        <\/td>\n<td>\n          0.103543\n        <\/td>\n<td>\n          558.310169\n        <\/td>\n<td>\n          0.102997\n        <\/td>\n<td>\n          2.594090\n        <\/td>\n<td>\n          0.102651\n        <\/td>\n<td>\n          0.110221\n        <\/td>\n<td>\n          &#8230;\n        <\/td>\n<td>\n          0.525860\n        <\/td>\n<td>\n          0.102106\n        <\/td>\n<td>\n          6.075186\n        <\/td>\n<td>\n          20.135445\n        <\/td>\n<td>\n          0.102284\n        <\/td>\n<td>\n          0.106246\n        <\/td>\n<td>\n          0.103076\n        <\/td>\n<td>\n          0.108334\n        <\/td>\n<td>\n          0.122234\n        <\/td>\n<td>\n          0.102741\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic2\n        <\/th>\n<td>\n          0.103196\n        <\/td>\n<td>\n          0.107593\n        <\/td>\n<td>\n         
 0.107848\n        <\/td>\n<td>\n          0.104019\n        <\/td>\n<td>\n          0.103053\n        <\/td>\n<td>\n          0.126004\n        <\/td>\n<td>\n          0.106085\n        <\/td>\n<td>\n          0.117876\n        <\/td>\n<td>\n          9.979474\n        <\/td>\n<td>\n          0.108507\n        <\/td>\n<td>\n          &#8230;\n        <\/td>\n<td>\n          0.366334\n        <\/td>\n<td>\n          0.102367\n        <\/td>\n<td>\n          5.066123\n        <\/td>\n<td>\n          0.103931\n        <\/td>\n<td>\n          31.039314\n        <\/td>\n<td>\n          0.107878\n        <\/td>\n<td>\n          0.102303\n        <\/td>\n<td>\n          0.102200\n        <\/td>\n<td>\n          0.128228\n        <\/td>\n<td>\n          0.104907\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic3\n        <\/th>\n<td>\n          0.102564\n        <\/td>\n<td>\n          0.107112\n        <\/td>\n<td>\n          2.022397\n        <\/td>\n<td>\n          12.968156\n        <\/td>\n<td>\n          0.102692\n        <\/td>\n<td>\n          0.130003\n        <\/td>\n<td>\n          0.113959\n        <\/td>\n<td>\n          1.838441\n        <\/td>\n<td>\n          0.101579\n        <\/td>\n<td>\n          8.345948\n        <\/td>\n<td>\n          &#8230;\n        <\/td>\n<td>\n          0.105286\n        <\/td>\n<td>\n          0.103549\n        <\/td>\n<td>\n          7.478397\n        <\/td>\n<td>\n          0.104231\n        <\/td>\n<td>\n          24.234774\n        <\/td>\n<td>\n          0.118099\n        <\/td>\n<td>\n          0.123212\n        <\/td>\n<td>\n          0.128494\n        <\/td>\n<td>\n          29.086953\n        <\/td>\n<td>\n          0.103109\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic4\n        <\/th>\n<td>\n          0.102634\n        <\/td>\n<td>\n          0.102345\n        <\/td>\n<td>\n          76.332226\n        <\/td>\n<td>\n          0.102486\n        <\/td>\n<td>\n          41.139452\n        <\/td>\n<td>\n       
   0.118419\n        <\/td>\n<td>\n          0.115930\n        <\/td>\n<td>\n          0.142032\n        <\/td>\n<td>\n          0.103316\n        <\/td>\n<td>\n          0.104292\n        <\/td>\n<td>\n          &#8230;\n        <\/td>\n<td>\n          0.409518\n        <\/td>\n<td>\n          0.102979\n        <\/td>\n<td>\n          737.692499\n        <\/td>\n<td>\n          0.600751\n        <\/td>\n<td>\n          0.116092\n        <\/td>\n<td>\n          0.102262\n        <\/td>\n<td>\n          0.108881\n        <\/td>\n<td>\n          0.102011\n        <\/td>\n<td>\n          0.115584\n        <\/td>\n<td>\n          0.513135\n        <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>\n    5 rows \u00d7 2273 columns\n  <\/p>\n<\/div>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Show top n keywords for each topic\ndef show_topics(vectorizer=vectorizer, lda_model=lda_model, n_words=20):\n    keywords = np.array(vectorizer.get_feature_names_out())\n    topic_keywords = []\n    for topic_weights in lda_model.components_:\n        top_keyword_locs = (-topic_weights).argsort()[:n_words]\n        topic_keywords.append(keywords.take(top_keyword_locs))\n    return topic_keywords\ntopic_keywords = show_topics(vectorizer=vectorizer, lda_model=best_lda_model, n_words=15)\n# Topic - Keywords Dataframe\ndf_topic_keywords = pd.DataFrame(topic_keywords)\ndf_topic_keywords.columns = ['Word '+str(i) for i in range(df_topic_keywords.shape[1])]\ndf_topic_keywords.index = ['Topic '+str(i) for i in range(df_topic_keywords.shape[0])]\ndf_topic_keywords<\/code><\/pre>\n<\/div>\n<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }<\/p>\n<p>    .dataframe tbody tr th {\n        vertical-align: top;\n    }<\/p>\n<p>    .dataframe thead th {\n        text-align: right;\n    }\n  <\/style>\n<table border=\"1\" class=\"dataframe\">\n<thead>\n<tr style=\"text-align: right;\">\n<th>\n        <\/th>\n<th>\n          Word 0\n 
       <\/th>\n<th>\n          Word 1\n        <\/th>\n<th>\n          Word 2\n        <\/th>\n<th>\n          Word 3\n        <\/th>\n<th>\n          Word 4\n        <\/th>\n<th>\n          Word 5\n        <\/th>\n<th>\n          Word 6\n        <\/th>\n<th>\n          Word 7\n        <\/th>\n<th>\n          Word 8\n        <\/th>\n<th>\n          Word 9\n        <\/th>\n<th>\n          Word 10\n        <\/th>\n<th>\n          Word 11\n        <\/th>\n<th>\n          Word 12\n        <\/th>\n<th>\n          Word 13\n        <\/th>\n<th>\n          Word 14\n        <\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<th>\n          Topic 0\n        <\/th>\n<td>\n          phone\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          add\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          think\n        <\/td>\n<td>\n          picture\n        <\/td>\n<td>\n          version\n        <\/td>\n<td>\n          month\n        <\/td>\n<td>\n          work\n        <\/td>\n<td>\n          minute\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          look\n        <\/td>\n<td>\n          list\n        <\/td>\n<td>\n          home\n        <\/td>\n<td>\n          number\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 1\n        <\/th>\n<td>\n          email\n        <\/td>\n<td>\n          send\n        <\/td>\n<td>\n          news\n        <\/td>\n<td>\n          check\n        <\/td>\n<td>\n          price\n        <\/td>\n<td>\n          bug\n        <\/td>\n<td>\n          access\n        <\/td>\n<td>\n          color\n        <\/td>\n<td>\n          customer\n        <\/td>\n<td>\n          order\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          message\n        <\/td>\n<td>\n          service\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          camera\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 2\n        <\/th>\n<td>\n          love\n        <\/td>\n<td>\n          app\n       
 <\/td>\n<td>\n          look\n        <\/td>\n<td>\n          date\n        <\/td>\n<td>\n          book\n        <\/td>\n<td>\n          lose\n        <\/td>\n<td>\n          guy\n        <\/td>\n<td>\n          family\n        <\/td>\n<td>\n          switch\n        <\/td>\n<td>\n          music\n        <\/td>\n<td>\n          recipe\n        <\/td>\n<td>\n          information\n        <\/td>\n<td>\n          quality\n        <\/td>\n<td>\n          feel\n        <\/td>\n<td>\n          change\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 3\n        <\/th>\n<td>\n          fix\n        <\/td>\n<td>\n          way\n        <\/td>\n<td>\n          day\n        <\/td>\n<td>\n          money\n        <\/td>\n<td>\n          need\n        <\/td>\n<td>\n          buy\n        <\/td>\n<td>\n          star\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          lot\n        <\/td>\n<td>\n          start\n        <\/td>\n<td>\n          spend\n        <\/td>\n<td>\n          help\n        <\/td>\n<td>\n          rate\n        <\/td>\n<td>\n          like\n        <\/td>\n<td>\n          track\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 4\n        <\/th>\n<td>\n          use\n        <\/td>\n<td>\n          pay\n        <\/td>\n<td>\n          want\n        <\/td>\n<td>\n          account\n        <\/td>\n<td>\n          user\n        <\/td>\n<td>\n          year\n        <\/td>\n<td>\n          fix\n        <\/td>\n<td>\n          note\n        <\/td>\n<td>\n          log\n        <\/td>\n<td>\n          error\n        <\/td>\n<td>\n          recommend\n        <\/td>\n<td>\n          problem\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          star\n        <\/td>\n<td>\n          option\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 5\n        <\/th>\n<td>\n          feature\n        <\/td>\n<td>\n          thank\n        <\/td>\n<td>\n          hate\n        <\/td>\n<td>\n          learn\n        <\/td>\n<td>\n   
       photo\n        <\/td>\n<td>\n          text\n        <\/td>\n<td>\n          job\n        <\/td>\n<td>\n          search\n        <\/td>\n<td>\n          suck\n        <\/td>\n<td>\n          help\n        <\/td>\n<td>\n          tab\n        <\/td>\n<td>\n          tool\n        <\/td>\n<td>\n          weight\n        <\/td>\n<td>\n          weather\n        <\/td>\n<td>\n          group\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 6\n        <\/th>\n<td>\n          work\n        <\/td>\n<td>\n          screen\n        <\/td>\n<td>\n          video\n        <\/td>\n<td>\n          need\n        <\/td>\n<td>\n          notification\n        <\/td>\n<td>\n          device\n        <\/td>\n<td>\n          wish\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          option\n        <\/td>\n<td>\n          set\n        <\/td>\n<td>\n          store\n        <\/td>\n<td>\n          choose\n        <\/td>\n<td>\n          type\n        <\/td>\n<td>\n          food\n        <\/td>\n<td>\n          item\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 7\n        <\/th>\n<td>\n          game\n        <\/td>\n<td>\n          play\n        <\/td>\n<td>\n          level\n        <\/td>\n<td>\n          fun\n        <\/td>\n<td>\n          player\n        <\/td>\n<td>\n          watch\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          enjoy\n        <\/td>\n<td>\n          start\n        <\/td>\n<td>\n          graphic\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          win\n        <\/td>\n<td>\n          character\n        <\/td>\n<td>\n          score\n        <\/td>\n<td>\n          lose\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 8\n        <\/th>\n<td>\n          time\n        <\/td>\n<td>\n          update\n        <\/td>\n<td>\n          try\n        <\/td>\n<td>\n          review\n        <\/td>\n<td>\n          crash\n        <\/td>\n<td>\n          know\n        <\/td>\n<td>\n         
let\n        <\/td>\n<td>\n          problem\n        <\/td>\n<td>\n          page\n        <\/td>\n<td>\n          load\n        <\/td>\n<td>\n          waste\n        <\/td>\n<td>\n          want\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          need\n        <\/td>\n<td>\n          version\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 9\n        <\/th>\n<td>\n          say\n        <\/td>\n<td>\n          card\n        <\/td>\n<td>\n          people\n        <\/td>\n<td>\n          time\n        <\/td>\n<td>\n          work\n        <\/td>\n<td>\n          tell\n        <\/td>\n<td>\n          download\n        <\/td>\n<td>\n          help\n        <\/td>\n<td>\n          datum\n        <\/td>\n<td>\n          issue\n        <\/td>\n<td>\n          happen\n        <\/td>\n<td>\n          support\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          know\n        <\/td>\n<td>\n          want\n        <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>In this step, we assign each topic a human-readable label based on its top keywords. For Topic 3, for example, the keywords include &#8220;money&#8221;, &#8220;buy&#8221;, and &#8220;spend&#8221;, so we conclude that this topic is about &#8220;Card Payment&#8221;. 
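<\/p>\n<p>Under the hood, the dominant topic shown in the document-topic table above is simply the index of the largest probability in each row, which NumPy\u2019s argmax returns directly. Below is a minimal, self-contained sketch using made-up probabilities rather than the actual model output:<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">import numpy as np\n\n# Hypothetical document-topic probabilities: one row per document, rows sum to 1\ndoc_topic = np.array([[0.03, 0.03, 0.25, 0.55, 0.14],\n                      [0.85, 0.05, 0.04, 0.03, 0.03]])\n\n# The dominant topic of each document is the column with the highest probability\ndominant_topic = np.argmax(doc_topic, axis=1)\nprint(dominant_topic)  # [3 0]<\/code><\/pre>\n<\/div>\n<p>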
Next, add the 10 topics we inferred to the dataframe.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">Topics = [\"Update Version\/Fix Crash Problem\",\"Download\/Internet Access\",\"Learn and Share\",\"Card Payment\",\"Notification\/Support\", \n          \"Account Problem\", \"Device\/Design\/Password\", \"Language\/Recommend\/Screen Size\", \"Graphic\/ Game Design\/ Level and Coin\", \"Photo\/Search\"]\ndf_topic_keywords[\"Topics\"]=Topics\ndf_topic_keywords<\/code><\/pre>\n<\/div>\n<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }<\/p>\n<p>    .dataframe tbody tr th {\n        vertical-align: top;\n    }<\/p>\n<p>    .dataframe thead th {\n        text-align: right;\n    }\n  <\/style>\n<table border=\"1\" class=\"dataframe\">\n<thead>\n<tr style=\"text-align: right;\">\n<th>\n        <\/th>\n<th>\n          Word 0\n        <\/th>\n<th>\n          Word 1\n        <\/th>\n<th>\n          Word 2\n        <\/th>\n<th>\n          Word 3\n        <\/th>\n<th>\n          Word 4\n        <\/th>\n<th>\n          Word 5\n        <\/th>\n<th>\n          Word 6\n        <\/th>\n<th>\n          Word 7\n        <\/th>\n<th>\n          Word 8\n        <\/th>\n<th>\n          Word 9\n        <\/th>\n<th>\n          Word 10\n        <\/th>\n<th>\n          Word 11\n        <\/th>\n<th>\n          Word 12\n        <\/th>\n<th>\n          Word 13\n        <\/th>\n<th>\n          Word 14\n        <\/th>\n<th>\n          Topics\n        <\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<th>\n          Topic 0\n        <\/th>\n<td>\n          phone\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          add\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          think\n        <\/td>\n<td>\n          picture\n        <\/td>\n<td>\n          version\n        <\/td>\n<td>\n          month\n        <\/td>\n<td>\n          work\n        <\/td>\n<td>\n          minute\n        <\/td>\n<td>\n 
         thing\n        <\/td>\n<td>\n          look\n        <\/td>\n<td>\n          list\n        <\/td>\n<td>\n          home\n        <\/td>\n<td>\n          number\n        <\/td>\n<td>\n          Update Version\/Fix Crash Problem\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 1\n        <\/th>\n<td>\n          email\n        <\/td>\n<td>\n          send\n        <\/td>\n<td>\n          news\n        <\/td>\n<td>\n          check\n        <\/td>\n<td>\n          price\n        <\/td>\n<td>\n          bug\n        <\/td>\n<td>\n          access\n        <\/td>\n<td>\n          color\n        <\/td>\n<td>\n          customer\n        <\/td>\n<td>\n          order\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          message\n        <\/td>\n<td>\n          service\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          camera\n        <\/td>\n<td>\n          Download\/Internet Access\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 2\n        <\/th>\n<td>\n          love\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          look\n        <\/td>\n<td>\n          date\n        <\/td>\n<td>\n          book\n        <\/td>\n<td>\n          lose\n        <\/td>\n<td>\n          guy\n        <\/td>\n<td>\n          family\n        <\/td>\n<td>\n          switch\n        <\/td>\n<td>\n          music\n        <\/td>\n<td>\n          recipe\n        <\/td>\n<td>\n          information\n        <\/td>\n<td>\n          quality\n        <\/td>\n<td>\n          feel\n        <\/td>\n<td>\n          change\n        <\/td>\n<td>\n          Learn and Share\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 3\n        <\/th>\n<td>\n          fix\n        <\/td>\n<td>\n          way\n        <\/td>\n<td>\n          day\n        <\/td>\n<td>\n          money\n        <\/td>\n<td>\n          need\n        <\/td>\n<td>\n          buy\n        <\/td>\n<td>\n          star\n        <\/td>\n<td>\n          make\n        
<\/td>\n<td>\n          lot\n        <\/td>\n<td>\n          start\n        <\/td>\n<td>\n          spend\n        <\/td>\n<td>\n          help\n        <\/td>\n<td>\n          rate\n        <\/td>\n<td>\n          like\n        <\/td>\n<td>\n          track\n        <\/td>\n<td>\n          Card Payment\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 4\n        <\/th>\n<td>\n          use\n        <\/td>\n<td>\n          pay\n        <\/td>\n<td>\n          want\n        <\/td>\n<td>\n          account\n        <\/td>\n<td>\n          user\n        <\/td>\n<td>\n          year\n        <\/td>\n<td>\n          fix\n        <\/td>\n<td>\n          note\n        <\/td>\n<td>\n          log\n        <\/td>\n<td>\n          error\n        <\/td>\n<td>\n          recommend\n        <\/td>\n<td>\n          problem\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          star\n        <\/td>\n<td>\n          option\n        <\/td>\n<td>\n          Notification\/Support\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 5\n        <\/th>\n<td>\n          feature\n        <\/td>\n<td>\n          thank\n        <\/td>\n<td>\n          hate\n        <\/td>\n<td>\n          learn\n        <\/td>\n<td>\n          photo\n        <\/td>\n<td>\n          text\n        <\/td>\n<td>\n          job\n        <\/td>\n<td>\n          search\n        <\/td>\n<td>\n          suck\n        <\/td>\n<td>\n          help\n        <\/td>\n<td>\n          tab\n        <\/td>\n<td>\n          tool\n        <\/td>\n<td>\n          weight\n        <\/td>\n<td>\n          weather\n        <\/td>\n<td>\n          group\n        <\/td>\n<td>\n          Account Problem\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 6\n        <\/th>\n<td>\n          work\n        <\/td>\n<td>\n          screen\n        <\/td>\n<td>\n          video\n        <\/td>\n<td>\n          need\n        <\/td>\n<td>\n          notification\n        <\/td>\n<td>\n          device\n        <\/td>\n<td>\n 
         wish\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          option\n        <\/td>\n<td>\n          set\n        <\/td>\n<td>\n          store\n        <\/td>\n<td>\n          choose\n        <\/td>\n<td>\n          type\n        <\/td>\n<td>\n          food\n        <\/td>\n<td>\n          item\n        <\/td>\n<td>\n          Device\/Design\/Password\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 7\n        <\/th>\n<td>\n          game\n        <\/td>\n<td>\n          play\n        <\/td>\n<td>\n          level\n        <\/td>\n<td>\n          fun\n        <\/td>\n<td>\n          player\n        <\/td>\n<td>\n          watch\n        <\/td>\n<td>\n          make\n        <\/td>\n<td>\n          enjoy\n        <\/td>\n<td>\n          start\n        <\/td>\n<td>\n          graphic\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          win\n        <\/td>\n<td>\n          character\n        <\/td>\n<td>\n          score\n        <\/td>\n<td>\n          lose\n        <\/td>\n<td>\n          Language\/Recommend\/Screen Size\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 8\n        <\/th>\n<td>\n          time\n        <\/td>\n<td>\n          update\n        <\/td>\n<td>\n          try\n        <\/td>\n<td>\n          review\n        <\/td>\n<td>\n          crash\n        <\/td>\n<td>\n          know\n        <\/td>\n<td>\n          let\n        <\/td>\n<td>\n          problem\n        <\/td>\n<td>\n          page\n        <\/td>\n<td>\n          load\n        <\/td>\n<td>\n          waste\n        <\/td>\n<td>\n          want\n        <\/td>\n<td>\n          app\n        <\/td>\n<td>\n          need\n        <\/td>\n<td>\n          version\n        <\/td>\n<td>\n          Graphic\/ Game Design\/ Level and Coin\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          Topic 9\n        <\/th>\n<td>\n          say\n        <\/td>\n<td>\n          card\n        <\/td>\n<td>\n          people\n        <\/td>\n<td>\n          
time\n        <\/td>\n<td>\n          work\n        <\/td>\n<td>\n          tell\n        <\/td>\n<td>\n          download\n        <\/td>\n<td>\n          help\n        <\/td>\n<td>\n          datum\n        <\/td>\n<td>\n          issue\n        <\/td>\n<td>\n          happen\n        <\/td>\n<td>\n          support\n        <\/td>\n<td>\n          thing\n        <\/td>\n<td>\n          know\n        <\/td>\n<td>\n          want\n        <\/td>\n<td>\n          Photo\/Search\n        <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<p>Assuming that you have already built the topic model, any new text must go through the same sequence of transformations before its topic can be predicted. In our case, the order is: sent_to_words() \u2013> lemmatization() \u2013> vectorizer.transform() \u2013> best_lda_model.transform(). Let\u2019s combine these steps into a single predict_topic() function.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\"># Define a function to predict the topic of a given text document.\nnlp = spacy.load('en_core_web_sm', disable=['parser', 'ner'])\ndef predict_topic(text, nlp=nlp):\n    global sent_to_words\n    global lemmatization\n    # Step 1: Clean with simple_preprocess\n    mytext_2 = list(sent_to_words(text))\n    # Step 2: Lemmatize\n    mytext_3 = lemmatization(mytext_2, allowed_postags=['NOUN', 'ADJ', 'VERB', 'ADV'])\n    # Step 3: Vectorize transform\n    mytext_4 = vectorizer.transform(mytext_3)\n    # Step 4: LDA transform\n    topic_probability_scores = best_lda_model.transform(mytext_4)\n    topic = df_topic_keywords.iloc[np.argmax(topic_probability_scores), 1:14].values.tolist()\n    # Step 5: Infer the topic label\n    infer_topic = df_topic_keywords.iloc[np.argmax(topic_probability_scores), -1]\n    return infer_topic, topic, topic_probability_scores\n# 
Predict the topic\nmytext = [\"Very Useful in diabetes age 30. I need control sugar. thanks Good deal\"]\ninfer_topic, topic, prob_scores = predict_topic(text=mytext)\nprint(topic)\nprint(infer_topic)<\/code><\/pre>\n<\/div>\n<pre><code>['way', 'day', 'money', 'need', 'buy', 'star', 'make', 'lot', 'start', 'spend', 'help', 'rate', 'like']\nCard Payment\n<\/code><\/pre>\n<p>Finally, let\u2019s predict a topic for every review in the original dataset.<\/p>\n<div class=\"clipboard\">\n<pre><code class=\"language-python\">def apply_predict_topic(text):\n    text = [text]\n    infer_topic, topic, prob_scores = predict_topic(text=text)\n    return infer_topic\ndf[\"Topic_key_word\"] = df['Translated_Review'].apply(apply_predict_topic)\ndf.head()<\/code><\/pre>\n<\/div>\n<div>\n<style scoped>\n    .dataframe tbody tr th:only-of-type {\n        vertical-align: middle;\n    }<\/p>\n<p>    .dataframe tbody tr th {\n        vertical-align: top;\n    }<\/p>\n<p>    .dataframe thead th {\n        text-align: right;\n    }\n  <\/style>\n<table border=\"1\" class=\"dataframe\">\n<thead>\n<tr style=\"text-align: right;\">\n<th>\n        <\/th>\n<th>\n          App\n        <\/th>\n<th>\n          Translated_Review\n        <\/th>\n<th>\n          Sentiment\n        <\/th>\n<th>\n          Sentiment_Polarity\n        <\/th>\n<th>\n          Sentiment_Subjectivity\n        <\/th>\n<th>\n          Topic_key_word\n        <\/th>\n<\/tr>\n<\/thead>\n<tbody>\n<tr>\n<th>\n          0\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          I like eat delicious food. 
That&#8217;s I&#8217;m cooking &#8230;\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          1.00\n        <\/td>\n<td>\n          0.533333\n        <\/td>\n<td>\n          Card Payment\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          1\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          This help eating healthy exercise regular basis\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          0.25\n        <\/td>\n<td>\n          0.288462\n        <\/td>\n<td>\n          Card Payment\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          3\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          Works great especially going grocery store\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          0.40\n        <\/td>\n<td>\n          0.875000\n        <\/td>\n<td>\n          Device\/Design\/Password\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          4\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          Best idea us\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          1.00\n        <\/td>\n<td>\n          0.300000\n        <\/td>\n<td>\n          Notification\/Support\n        <\/td>\n<\/tr>\n<tr>\n<th>\n          5\n        <\/th>\n<td>\n          10 Best Foods for You\n        <\/td>\n<td>\n          Best way\n        <\/td>\n<td>\n          Positive\n        <\/td>\n<td>\n          1.00\n        <\/td>\n<td>\n          0.300000\n        <\/td>\n<td>\n          Card Payment\n        <\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<h2>7&#46; Conclusion<\/h2>\n<p>In this tutorial, we\u2019ve used Google Play Store reviews to generate topics with LDA. We examined two ways to import our data: (1) GridDB and (2) pandas read_csv. For large datasets, GridDB provides an excellent alternative for importing data into your notebook, as it is open source and highly scalable. 
<a href=\"https:\/\/griddb.net\/en\/downloads\/\">Download GridDB<\/a> today!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. Due to the fact that text data is unlabeled, it is an unsupervised technique. It is increasingly important to categorize documents according to topics in this world filled with data. As an example, if a company [&hellip;]<\/p>\n","protected":false},"author":41,"featured_media":27479,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[121],"tags":[],"class_list":["post-46701","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.1.1 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Topic Modeling with LDA Using Python and GridDB | GridDB: Open Source Time Series Database for IoT<\/title>\n<meta name=\"description\" content=\"In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. Due to the fact that text data is unlabeled, it\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Topic Modeling with LDA Using Python and GridDB | GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"og:description\" content=\"In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. 
Due to the fact that text data is unlabeled, it\" \/>\n<meta property=\"og:url\" content=\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\" \/>\n<meta property=\"og:site_name\" content=\"GridDB: Open Source Time Series Database for IoT\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/griddbcommunity\/\" \/>\n<meta property=\"article:published_time\" content=\"2022-04-28T07:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-13T20:56:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/griddb.net\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg\" \/>\n\t<meta property=\"og:image:width\" content=\"2560\" \/>\n\t<meta property=\"og:image:height\" content=\"1707\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"griddb-admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:site\" content=\"@GridDBCommunity\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"griddb-admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"15 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\"},\"author\":{\"name\":\"griddb-admin\",\"@id\":\"https:\/\/griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233\"},\"headline\":\"Topic Modeling with LDA Using Python and GridDB\",\"datePublished\":\"2022-04-28T07:00:00+00:00\",\"dateModified\":\"2025-11-13T20:56:01+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\"},\"wordCount\":1745,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/griddb.net\/en\/#organization\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\",\"url\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\",\"name\":\"Topic Modeling with LDA Using Python and GridDB | GridDB: Open Source Time Series Database for 
IoT\",\"isPartOf\":{\"@id\":\"https:\/\/griddb.net\/en\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage\"},\"thumbnailUrl\":\"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg\",\"datePublished\":\"2022-04-28T07:00:00+00:00\",\"dateModified\":\"2025-11-13T20:56:01+00:00\",\"description\":\"In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. Due to the fact that text data is unlabeled, it\",\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage\",\"url\":\"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg\",\"contentUrl\":\"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg\",\"width\":2560,\"height\":1707},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/griddb.net\/en\/#website\",\"url\":\"https:\/\/griddb.net\/en\/\",\"name\":\"GridDB: Open Source Time Series Database for IoT\",\"description\":\"GridDB is an open source time-series database with the performance of NoSQL and convenience of 
SQL\",\"publisher\":{\"@id\":\"https:\/\/griddb.net\/en\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/griddb.net\/en\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/griddb.net\/en\/#organization\",\"name\":\"Fixstars\",\"url\":\"https:\/\/griddb.net\/en\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb.net\/en\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"contentUrl\":\"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png\",\"width\":200,\"height\":83,\"caption\":\"Fixstars\"},\"image\":{\"@id\":\"https:\/\/griddb.net\/en\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.facebook.com\/griddbcommunity\/\",\"https:\/\/x.com\/GridDBCommunity\",\"https:\/\/www.linkedin.com\/company\/griddb-by-toshiba\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233\",\"name\":\"griddb-admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/griddb.net\/en\/#\/schema\/person\/image\/\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g\",\"caption\":\"griddb-admin\"},\"url\":\"https:\/\/griddb.net\/en\/author\/griddb-admin\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Topic Modeling with LDA Using Python and GridDB | GridDB: Open Source Time Series Database for IoT","description":"In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. Due to the fact that text data is unlabeled, it","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/","og_locale":"en_US","og_type":"article","og_title":"Topic Modeling with LDA Using Python and GridDB | GridDB: Open Source Time Series Database for IoT","og_description":"In natural language processing, topic modeling assigns a topic to a given corpus based on the words in it. Due to the fact that text data is unlabeled, it","og_url":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/","og_site_name":"GridDB: Open Source Time Series Database for IoT","article_publisher":"https:\/\/www.facebook.com\/griddbcommunity\/","article_published_time":"2022-04-28T07:00:00+00:00","article_modified_time":"2025-11-13T20:56:01+00:00","og_image":[{"width":2560,"height":1707,"url":"https:\/\/griddb.net\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg","type":"image\/jpeg"}],"author":"griddb-admin","twitter_card":"summary_large_image","twitter_creator":"@GridDBCommunity","twitter_site":"@GridDBCommunity","twitter_misc":{"Written by":"griddb-admin","Est. 
reading time":"15 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#article","isPartOf":{"@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/"},"author":{"name":"griddb-admin","@id":"https:\/\/griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233"},"headline":"Topic Modeling with LDA Using Python and GridDB","datePublished":"2022-04-28T07:00:00+00:00","dateModified":"2025-11-13T20:56:01+00:00","mainEntityOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/"},"wordCount":1745,"commentCount":0,"publisher":{"@id":"https:\/\/griddb.net\/en\/#organization"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg","articleSection":["Blog"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/","url":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/","name":"Topic Modeling with LDA Using Python and GridDB | GridDB: Open Source Time Series Database for IoT","isPartOf":{"@id":"https:\/\/griddb.net\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage"},"image":{"@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage"},"thumbnailUrl":"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg","datePublished":"2022-04-28T07:00:00+00:00","dateModified":"2025-11-13T20:56:01+00:00","description":"In natural language processing, topic 
modeling assigns a topic to a given corpus based on the words in it. Due to the fact that text data is unlabeled, it","inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb.net\/en\/blog\/topic-modeling-with-lda-using-python-and-griddb\/#primaryimage","url":"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg","contentUrl":"\/wp-content\/uploads\/2021\/05\/accounting_2560x1707.jpeg","width":2560,"height":1707},{"@type":"WebSite","@id":"https:\/\/griddb.net\/en\/#website","url":"https:\/\/griddb.net\/en\/","name":"GridDB: Open Source Time Series Database for IoT","description":"GridDB is an open source time-series database with the performance of NoSQL and convenience of SQL","publisher":{"@id":"https:\/\/griddb.net\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/griddb.net\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/griddb.net\/en\/#organization","name":"Fixstars","url":"https:\/\/griddb.net\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb.net\/en\/#\/schema\/logo\/image\/","url":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","contentUrl":"https:\/\/griddb.net\/wp-content\/uploads\/2019\/04\/fixstars_logo_web_tagline.png","width":200,"height":83,"caption":"Fixstars"},"image":{"@id":"https:\/\/griddb.net\/en\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.facebook.com\/griddbcommunity\/","https:\/\/x.com\/GridDBCommunity","https:\/\/www.linkedin.com\/company\/griddb-by-toshiba"]},{"@type":"Person","@id":"https:\/\/griddb.net\/en\/#\/schema\/person\/4fe914ca9576878e82f5e8dd3ba52233","name":
"griddb-admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/griddb.net\/en\/#\/schema\/person\/image\/","url":"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/5bceca1cafc06886a7ba873e2f0a28011a1176c4dea59709f735b63ae30d0342?s=96&d=mm&r=g","caption":"griddb-admin"},"url":"https:\/\/griddb.net\/en\/author\/griddb-admin\/"}]}},"_links":{"self":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts\/46701","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/users\/41"}],"replies":[{"embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/comments?post=46701"}],"version-history":[{"count":1,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts\/46701\/revisions"}],"predecessor-version":[{"id":51375,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/posts\/46701\/revisions\/51375"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/media\/27479"}],"wp:attachment":[{"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/media?parent=46701"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/categories?post=46701"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/griddb.net\/en\/wp-json\/wp\/v2\/tags?post=46701"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}