OpenAI’s GPT-4o is a multimodal AI model capable of understanding and generating several forms of data, including text, images, and audio; video can also be handled by feeding the model sampled frames. In this post, we will explore the new possibilities this multimodal model opens up for visiting exhibitions with an AI influencer.


GPT-4o is an AI model that can understand and generate multiple types of data. Its key strengths are as follows:

  • Understanding Various Data Forms: It can process and understand different types of data together, such as text, images, and audio (and video, via sampled frames). This gives users a richer and more intuitive interface.
  • Real-Time Q&A: It interacts with users in real time, offering accurate information based on the input data, so users receive immediate feedback.
  • Contextual Understanding and Generation: It comprehends given situations and contexts, generating appropriate responses accordingly. This greatly helps in personalizing and enhancing the user experience.

AI Influencer Tyri

By leveraging the strengths of GPT-4o, an AI influencer can provide a new form of experience by accompanying users through exhibitions. The AI influencer can answer users’ questions in real-time, provide information about the exhibited artworks, and share personal interpretations and evaluations of the pieces. For more insights into Tyri’s daily life, please visit the blog below.

How to Input Recorded Videos into GPT-4o

The screen is divided into two panes: the left side shows the input screen or video, and the right side shows the video as the AI sees it, alongside a chat interface. For processing, each video is reduced to 5 sampled frames, which are sent to the model as images.
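The frame-sampling step described above can be sketched as follows. This is a minimal, illustrative version, not the post's actual source: it assumes the official `openai` Python SDK and `opencv-python` are installed, and all function names (`sample_frame_indices`, `frames_to_data_urls`, `ask_about_video`) are hypothetical.

```python
# Sketch: send a recorded video to GPT-4o as 5 evenly spaced frames.
import base64

def sample_frame_indices(total_frames: int, n: int = 5) -> list[int]:
    """Pick up to n frame indices spread evenly across the video."""
    if total_frames <= 0 or n <= 0:
        return []
    step = max(total_frames // n, 1)
    return list(range(0, total_frames, step))[:n]

def frames_to_data_urls(video_path: str, n: int = 5) -> list[str]:
    """Decode n evenly spaced frames and encode them as JPEG data URLs."""
    import cv2  # imported lazily so the index helper has no hard dependency
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    urls = []
    for idx in sample_frame_indices(total, n):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if not ok:
            continue
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            b64 = base64.b64encode(buf.tobytes()).decode("utf-8")
            urls.append(f"data:image/jpeg;base64,{b64}")
    cap.release()
    return urls

def ask_about_video(video_path: str, question: str) -> str:
    """Send the question plus the sampled frames to GPT-4o."""
    from openai import OpenAI  # reads OPENAI_API_KEY from the environment
    content = [{"type": "text", "text": question}]
    content += [{"type": "image_url", "image_url": {"url": u}}
                for u in frames_to_data_urls(video_path)]
    resp = OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content
```

Sampling evenly spaced frames keeps the request small while still giving the model a sense of how the scene changes over the course of the recording.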


To view the source code for inputting images into GPT-4o, please click the link below:
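The linked source is not reproduced here, but as a rough sketch (again assuming the official `openai` Python SDK; the helper name `image_message` is hypothetical), passing a single local image to GPT-4o looks like this:

```python
# Sketch: build a GPT-4o chat message carrying one local image.
import base64
from pathlib import Path

def image_message(path: str, prompt: str) -> dict:
    """Encode a local JPEG as a data URL inside a user message."""
    b64 = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }

# Usage (requires OPENAI_API_KEY):
# from openai import OpenAI
# OpenAI().chat.completions.create(
#     model="gpt-4o",
#     messages=[image_message("artwork.jpg", "Describe this artwork.")],
# )
```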

Through this process, we explored the exhibition with Tyri.