Fun that the idea is not to guess the movie, but to get the robot to guess it. I wish I could use the regular emoji keyboard to pick the emoji, though.
I ended up trying to funnel people towards the emoji picker so that I could:
1) prevent emojis that fall outside GPT 3.5's training window. As an example, ChatGPT said that this one was a box of falafel.
2) exclude certain emojis that had a high likelihood of leading to GPT refusing to guess (it would claim they were too generic).
Some technical notes. Even for something like this, the OpenAI API can be expensive - there are a few decisions I made that keep costs lower.
1. I cache guesses and responses in Redis when I first see them, so I don't need to hit the API for duplicate guesses (which are common when lots of people are guessing the same movies) -- see the sketch after this list.
2. Emoji order doesn't matter (I sort the emojis in each guess, so reorderings hit the same cache entry).
3. Guesses are not path dependent; each guess is treated as completely new. This is understandably annoying when GPT guesses the same incorrect movie multiple times, but it drastically improves cacheability and keeps me from hitting my API limits.
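To make 1 and 2 concrete, here's a minimal sketch of the idea (assuming Python with redis-py; `call_openai` and the key scheme are illustrative, not the actual code):

```python
import hashlib
import redis

r = redis.Redis()

def cache_key(emojis: list[str]) -> str:
    # Sorting makes the key order-insensitive (point 2), so wolf+money+road
    # and money+wolf+road hit the same cache entry.
    canonical = "".join(sorted(emojis))
    return "guess:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def guess_movie(emojis: list[str]) -> str:
    key = cache_key(emojis)
    cached = r.get(key)
    if cached is not None:           # duplicate guess: no API call (point 1)
        return cached.decode("utf-8")
    answer = call_openai(emojis)     # hypothetical helper that hits the API
    r.set(key, answer)
    return answer
```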
Happy to answer any questions here! If you'd like to reach out to me, my email is in my bio.
Also, do you have stats on the guesses? It would be interesting to see how many people managed to guess correctly within 1/2/3 attempts, and what the most common emoji combinations were for both correct and incorrect guesses.
For The Wolf of Wall Street, the following result using wolf + money bag + roadway was considered wrong:
> The guess is: The Wolf of Wall Street Reason: The movie is about the rise and fall of Jordan Belfort, a stockbroker who defrauded investors out of hundreds of millions of dollars, earning him a lot of money in the process. The road emoji may represent the stock market, which is a symbol of the financial industry. Additionally, the wolf emoji can be seen as a reference to the word "wolf" in the title of the movie, while the emoji...
interesting, thanks for flagging -- this looks to be a parsing issue: ChatGPT occasionally returns guesses in a format different from the one it was instructed to use. I'll give some more thought to how to fix this...
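Not the actual fix yet, but it'll probably look something like this -- a sketch of a more forgiving parser (Python; the accepted formats and the regex are assumptions on my part):

```python
import re

def extract_guess(raw: str) -> str | None:
    # Tolerate a few formats GPT 3.5 drifts into: "Guess: X",
    # "The guess is: X Reason: ...", or a bare title on the first line.
    m = re.search(r"(?:the\s+guess\s+is|guess)\s*:\s*(.+?)(?:\s+reason\s*:|$)",
                  raw, re.IGNORECASE | re.DOTALL)
    if m:
        title = m.group(1)
    else:
        lines = raw.strip().splitlines()
        title = lines[0] if lines else ""
    # Trim whitespace and stray quotes before comparing against the answer
    # (this also covers the trailing-space mismatch reported elsewhere here).
    title = title.strip().strip('"“”')
    return title or None
```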
Pretty entertaining concept, but I guess the site was developed with mobile in mind? It's frustrating to use on a laptop: about 80% of the page is unused black space, with a big bar at the bottom, and the actual emoji area is smaller still -- I can see about 3 rows' worth of icons.
La La Land: The rainbow emoji and cocktail emoji represent the colorful and musical theme of the movie, and the police officer emoji represents the main character's job as a jazz pianist who falls in love with an aspiring actress in Los Angeles.
nice! I don't limit the search space in GPT's system prompt, though each daily puzzle uses popular movies. I think for the top ~100 movies it'll probably do a fine job.
If this game goes on for a full year though, I imagine it's going to really struggle unless the movie is about something really unique.
Ah, I see! You of course pick puzzles from the most popular movies first, and GPT of course lands on the most popular ones too. Interesting dynamic; it makes the game much easier in the beginning. I was wondering how it could work so well with only three emojis.
This is incredibly fun and addictive, and it brings the fun part of a 2-player game to a single-player experience. A brilliant way to use the LLM.
Wish list: It would be super cool if we could use the emoji skin tone variants.
Is ChatGPT given a list of films to guess from beforehand, or does it just guess the film outright, with no extra info? I wouldn't have guessed Inception for [Puzzle-piece, scientist, city].
I asked ChatGPT for lists of movies and TV shows that it knows about and used those, to avoid getting into situations with movies released after the training cutoff.
I think you misunderstood the question. When it is guessing, is it guessing "which of these 50 movies is represented by these 3 emojis" or something, or is it free to guess any movie?
A bit related: I found out today that Discord is working on a feature to automatically add emojis to channel names. You can enable it in the experiments panel by writing a bit of JavaScript in the console.
What I find interesting is that the emojis are always very accurate, and the emoji name isn't contained in any way in the channel name, so I feel like they're using an LLM to do it -- and it seems like a great application.
I wanted to put this bug report in its own thread so it wouldn’t distract from my main comment: oddly, it told me on my first round today that “Black Panther” wasn’t the right guess for “Black Panther.” I had used the prompt ⬛ ⬛ and it said:
“Black Panther ”
I tried a slightly different variation and it guessed BP again, and this time it was accepted.
One more quibble after playing today (Superbad was hard). I feel like GPT should not re-guess the same wrong movie multiple times. Perhaps you could remind it in your prompts that it’s “not A or B” on turn C.
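E.g., roughly this (the prompt wording here is just illustrative, and it would trade away some of the cacheability mentioned upthread):

```python
def build_prompt(emojis: list[str], wrong_guesses: list[str]) -> str:
    # Fold earlier wrong guesses into the prompt so GPT stops repeating them.
    prompt = f"Guess the movie these emojis represent: {' '.join(emojis)}."
    if wrong_guesses:
        prompt += ' It is not "' + '" or "'.join(wrong_guesses) + '".'
    return prompt
```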
Edit: looks like some kind of emoji detection feature? I don't understand why, when you can just serve an open-source emoji font, but I suppose at least it's not tracking this time.
Thank you! Sadly I removed the thumbs up/down emojis and okay hand emoji -- when these were used in clues, GPT 3.5 tended to complain that the emojis were too generic and refuse to guess.
Should be back up, sorry about that! For anyone else hitting this, you may need to refresh and try again in a few minutes -- the OpenAI API sometimes complains, and I don't handle it very gracefully yet.
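When I get to it, the fix will probably be the usual pattern: retry transient API failures with exponential backoff. A rough sketch (Python; not what's deployed):

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    # Retry transient failures (rate limits, 5xx) with exponential backoff.
    for i in range(attempts):
        try:
            return fn()
        except Exception:  # real code should catch the API client's error types
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)  # 1s, 2s, 4s, ...
```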
When I soft launched it 3 weeks ago I was doing TV shows, but three shows per day quickly exhausted my initial list, so I switched to movies over the past few days.
From testing as I developed this, I noticed that certain emojis often led GPT 3.5 to complain that the combinations were too generic. After some really basic testing I ended up removing the 3 worst offenders from the emoji picker: ['+1', '-1', 'okay_hand'].
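The picker-side filter itself is trivial -- conceptually something like this (the data shape here is assumed; the blocked short names are the real ones above):

```python
# Data shape is assumed; the blocked short names are the real ones above.
BLOCKLIST = {"+1", "-1", "okay_hand"}

emoji_list = [
    {"short_name": "+1", "char": "👍"},
    {"short_name": "wolf", "char": "🐺"},
    {"short_name": "money_bag", "char": "💰"},
]

pickable = [e for e in emoji_list if e["short_name"] not in BLOCKLIST]
# -> only the wolf and money bag remain pickable
```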
country flags are enabled - the search displays the Italian flag if I type in "italy". However, if I type "italian", it unfortunately doesn't match the flag.
They weren't there for me on Windows. Another one I found missing is the superhero emoji. (Those emojis aren't missing from my system -- they work elsewhere.)
Now I'm on macOS and I don't see anything missing other than the ones that you intentionally removed.