What Do Emotions Look Like? Let’s Ask The Computer

Creating pixelated images from the emotions in the comments on a YouTube video


The Story

A few days ago I watched a YouTube video by Tom Scott in which he uses the YouTube API to update the title of his video in real time to match the number of views it has (link to his video). That seemed pretty straightforward to me, so I thought: let's take it one step further.

In this post, I will show you how you can extract emotions from the comments on your YouTube video and convert them into a pixelated image, with the commenter's name, profile picture, and emotion emojis overlaid on the thumbnail. Does that sound confusing?

This is how it will look:

Those emojis represent the emotions detected in the comment.

Now, I'll be honest with you. This is the most naïve way of representing emotions as an image; I will put links below on how you can do it in a better way using GANs. But it was a fun project to work on, and it taught me a lot about the YouTube API, OpenCV, and so much more. Also, I have a YouTube channel, so I thought it would be interesting to see people's reactions.

The Workflow

Step 1: Get the comments from YouTube using YouTube API.

Step 2: For each comment, break it down and categorise each word into different emotions.

Step 3: For each emotion choose a colour and use some kind of rule to create an image.

Step 4: Use the gathered comment data (name, profile pic of the commenter) and display it over the image.

Step 5: Update the thumbnail of the video every 10 minutes.

Let's start with Step 1

There are more functions in my GitHub repository, like one to get a collection of video IDs for any channel or to get the channel ID of any YouTuber: patni11/UTOC (github.com).

To gather comments, we need the videoId of a video. You can grab it from the video's URL; it is the part after '?v=':

This is the video ID
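If you'd rather grab it programmatically, a short helper with Python's standard library does the trick (a small sketch of my own, not from the repo):

```python
from urllib.parse import urlparse, parse_qs

def get_video_id(url: str) -> str:
    # "https://www.youtube.com/watch?v=cUx9qUkYJgk" -> "cUx9qUkYJgk"
    return parse_qs(urlparse(url).query)["v"][0]
```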

If you want the videoIds of multiple videos, e.g. all the videos on your channel, you can use the get_youtube_videos_ID(id) function in my repo.

Now that we have the videoId, we can go ahead.

This code reads the video_id from a file; you can modify it to work with any string. We will change this code later to work with collected comments; for now, it just collects the comment text, the commenter's name, and their profile image from the JSON data. You can find more info about the structure of this JSON data here.
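Since the gist embed doesn't render here, this is a minimal sketch of what the comment-gathering step might look like with google-api-python-client. API_KEY is a placeholder and the exact fields pulled out of the JSON are my own choice; the full version lives in the repo:

```python
import googleapiclient.discovery

API_KEY = "YOUR_API_KEY"  # placeholder: use your own API key

def get_comments(video_id):
    """Fetch top-level comments with each author's name and profile image."""
    youtube = googleapiclient.discovery.build(
        "youtube", "v3", developerKey=API_KEY
    )
    response = youtube.commentThreads().list(
        part="snippet",
        videoId=video_id,
        maxResults=100,
        textFormat="plainText",
    ).execute()

    comments = []
    for item in response["items"]:
        snippet = item["snippet"]["topLevelComment"]["snippet"]
        comments.append({
            "comment": snippet["textDisplay"],
            "name": snippet["authorDisplayName"],
            "profile_pic": snippet["authorProfileImageUrl"],
        })
    return comments
```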

Step 2: Analysing the comment

Now that we have enough data about the comment, it’s time to analyse it.

I commented the above code in detail. Here is the overview —

Definition of the lexicon (screenshot)

We call the main() function and pass it a single comment. The comment is converted into a pandas DataFrame and passed to the extract_emotions() function for analysis. We will use the NRC lexicon for the analysis; to download the lexicon file, click here. (I got some help from this post by Greg Rafferty, who has done sentiment analysis on the texts of Harry Potter; be sure to check it out.)

In the extract_emotions() function, we create two more DataFrames (emo_df, total_df): emo_df stores all the emotions, and total_df stores the number of emotions detected for each comment. We concatenate all that information and return it to our main function.

We then pass this data to the clean_data() function to get what we need to create an image. clean_data() returns emotion_values (how many words with each particular emotion were detected), sentiment (whether the comment was positive or negative), total (how many emotions were detected, with and without including sentiment), and the word count of the comment.
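For reference, here is a stripped-down sketch of the lexicon lookup. It assumes the standard word-level NRC file (word, emotion, association flag, tab-separated) and simplifies the DataFrame juggling described above; the actual extract_emotions() and clean_data() in the repo do more bookkeeping:

```python
import pandas as pd

# Assumption: the standard NRC word-level lexicon file, with columns
# word <tab> emotion <tab> association (0 or 1)
lexicon = pd.read_csv(
    "NRC-Emotion-Lexicon-Wordlevel-v0.92.txt",
    sep="\t", names=["word", "emotion", "association"],
)
lexicon = lexicon[lexicon["association"] == 1]

def extract_emotions(comment):
    """Count how many words in the comment match each NRC category."""
    words = comment.lower().split()
    matches = lexicon[lexicon["word"].isin(words)]
    return matches["emotion"].value_counts().to_dict()

def clean_data(comment):
    """Split the raw counts into the pieces the image generator needs."""
    counts = extract_emotions(comment)
    # NRC also tags words as "positive"/"negative"; treat those as sentiment
    sentiment = counts.get("positive", 0) - counts.get("negative", 0)
    emotion_values = {e: c for e, c in counts.items()
                      if e not in ("positive", "negative")}
    total = sum(emotion_values.values())           # emotions only
    total_with_sentiment = sum(counts.values())    # including sentiment
    word_count = len(comment.split())
    return emotion_values, sentiment, total, total_with_sentiment, word_count
```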

Now, this file can analyse any text, whether stored in a file or passed in directly, so it can be used in other projects as well. However, a better way of doing this would be to use LSTM networks. That requires a lot of training and computing power, which I do not have, but you can use the following links to implement it.

A Step-by-Step Guide on Sentiment Analysis with RNN and LSTM, by Lamiae Hana (medium.com)

Text to Image Synthesis Using Generative Adversarial Networks (arxiv.org)

The problem is that these methods require a textual description of the image, e.g. "This flower has 8 petals and is red." With YouTube comments, we don't get any data like that, so you would have to train your own model.

Step 3 and Step 4: Creating the Image

The above code is commented in detail. Here is the overview —

We call the main function and pass it all the data we previously collected. It uses that data to create an image based on Plutchik's Wheel of Emotions.

Plutchik's wheel of emotion (image from positivepsychology.com)

I just took the RGB colour values from this image using the Digital Color Meter utility app on my Mac. The generate_img function loops through each pixel of the image, and for each pixel it gets an RGB value from the get_rgb function. get_rgb first chooses a particular emotion using a weighted random value and then creates an RGB value for it: the red channel is incremented by the amount of negativity and the blue channel by the amount of positivity in the emotion (look at the code for a better understanding). As OpenCV requires BGR instead of RGB, we use the convert_to_bgr function, which is quite straightforward. Finally, after looping through all the pixels, we get a pixelated image, and we overlay the profile picture, detected emojis, and the commenter's username using OpenCV.

Image generated with all the emojis of the emotions and the name of the commenter.
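For illustration, here is a simplified sketch of that colouring logic. The palette values are rough guesses from Plutchik's wheel, the sentiment tint factor and block size are made up, and I fill blocks instead of single pixels to get the pixelated look; the real generate_img and get_rgb are in the repo:

```python
import random
import numpy as np
import cv2

# Assumption: rough RGB values eyeballed from Plutchik's wheel
EMOTION_COLORS = {
    "joy": (255, 224, 84), "trust": (140, 199, 96),
    "fear": (52, 128, 57), "surprise": (64, 178, 181),
    "sadness": (70, 112, 184), "disgust": (137, 82, 163),
    "anger": (222, 62, 74), "anticipation": (245, 146, 54),
}

def get_rgb(emotion_counts, positivity, negativity):
    """Pick an emotion weighted by its count, then tint by sentiment."""
    emotion = random.choices(
        list(emotion_counts), weights=list(emotion_counts.values())
    )[0]
    r, g, b = EMOTION_COLORS[emotion]
    r = min(255, r + 5 * negativity)   # more negativity -> redder
    b = min(255, b + 5 * positivity)   # more positivity -> bluer
    return r, g, b

def generate_img(emotion_counts, positivity, negativity,
                 height=720, width=1280, block=40):
    """Fill the image block by block; only called when emotions were found."""
    img = np.zeros((height, width, 3), dtype=np.uint8)
    for y in range(0, height, block):
        for x in range(0, width, block):
            r, g, b = get_rgb(emotion_counts, positivity, negativity)
            img[y:y + block, x:x + block] = (b, g, r)  # OpenCV wants BGR
    return img

cv2.imwrite("thumbnail.png", generate_img({"joy": 3, "anger": 1}, 2, 1))
```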

Plot twist: this generate_img function only gets called if we detect any emotions. What happens when no emotions are detected?

Well, I thought of doing something interesting. If no emotions are detected in the comment, the thumbnail of the YouTube video will become this:

Thumbnail when no emotions are detected in the comment.

Step 5: Updating the thumbnail of the video

Now, remember I told you we would need to update the gathering_comments.py code? Let's do that now.

Here we first check whether there are any unseen comments. If there are, we call our generate_image function; otherwise, the thumbnail becomes this image:

If no comments are available

Why this image? Because it was the first image I ever created, using Affinity Photo and a Skillshare course 😅. You can put anything here.

Note that you must have an API key and OAuth 2.0 enabled to receive comments and edit thumbnails. You can find out how to set them up here.
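In case it helps, here is a rough sketch of the OAuth flow and the thumbnail update, using google-auth-oauthlib and the thumbnails().set endpoint. The client_secret.json path, the scope choice, and the 10-minute loop are assumptions on my part:

```python
import time
import googleapiclient.discovery
from googleapiclient.http import MediaFileUpload
from google_auth_oauthlib.flow import InstalledAppFlow

SCOPES = ["https://www.googleapis.com/auth/youtube.force-ssl"]

def get_authenticated_service():
    # "client_secret.json" comes from your Google Cloud OAuth credentials
    flow = InstalledAppFlow.from_client_secrets_file("client_secret.json", SCOPES)
    credentials = flow.run_local_server()
    return googleapiclient.discovery.build("youtube", "v3", credentials=credentials)

def update_thumbnail(youtube, video_id, image_path="thumbnail.png"):
    """Upload the generated image as the video's thumbnail."""
    youtube.thumbnails().set(
        videoId=video_id,
        media_body=MediaFileUpload(image_path),
    ).execute()

youtube = get_authenticated_service()
while True:
    # In the full project, check for unseen comments and regenerate
    # thumbnail.png here before uploading.
    update_thumbnail(youtube, "YOUR_VIDEO_ID")
    time.sleep(600)  # update every 10 minutes, as in Step 5
```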

Note

If you want to update the thumbnail on your video forever, put this code on a server and it will keep running. Also, if you have any missing files or issues, check my GitHub repo: patni11/UTOC (github.com).

References:

I really liked Greg Rafferty's post on sentiment analysis of the Harry Potter books: Basic NLP on the Texts of Harry Potter: Sentiment Analysis (towardsdatascience.com)

Also, this YouTube video:

https://www.youtube.com/watch?v=cUx9qUkYJgk
