
SUNO AI
01
DESIGN CHALLENGE
In this project, my goal is to create a short promotional animated video that showcases Suno's creative songwriting capabilities while visually enhancing the audio through AI-generated animation.
02
RESEARCH
The target audience for this project includes anyone with an interest in music, so I selected two styles that resonate with me personally. Viking folk music is a revivalist movement that draws inspiration from historical knowledge and Norse aesthetics. It combines traditional instruments, reconstructed sounds, and modern techniques.
R&B (Rhythm and Blues) is a genre of music that originated in the 1940s and has since become a cornerstone of popular music. It combines soulful melodies and dynamic rhythms, often delving into themes of love, relationships, and personal struggles.

Heilung, music group


Traditional Viking Costume
R&B aesthetic

03
IDEAS
DIRECTION 01
"Viking folk music"

Before creating the music, I first developed two style frames. For the Viking music, the words that immediately came to mind were pirates, the sea, and Norse mythology. I aimed to give the style frames an epic feel by incorporating an oil painting aesthetic and adding ancient Nordic text.
DIRECTION 02
"R&B"

For the R&B music, I aimed to create a dreamy, floating atmosphere. I chose purple, blue, and pale yellow as the primary colors and added a subtle blur to the text to enhance the ethereal feel.
04
MUSIC DEVELOPMENT
After selecting two music styles, I began using Suno to produce music. Suno offers a high degree of flexibility, allowing users to create music easily, even without a background in music.


LYRICS
01.
In the creation interface, the first module focuses on lyric creation. Users can either input their own lyrics or use Suno's built-in AI to generate lyrics quickly. To create a purely instrumental track, simply click 'Instrumental' in the upper-right corner. It's worth noting that adding a 'tag' before each verse is incredibly useful, as it lets you incorporate instruments, chants, or other elements into the song.
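For illustration, a tagged lyric sheet might look something like the sketch below (the tags and placeholders are hypothetical, not lyrics from this project); Suno reads the bracketed lines as cues for structure, instruments, and vocals:

[Intro: low war drums, distant horn]
[Verse 1]
...
[Chorus: layered group chant]
...
[Outro: instrumental, waves and wind]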
STYLE
02.
In the second module, users can choose the song's style.
For example, when creating an R&B-style song, I used the following style keywords: Blues and Rhythm, Soothing, and Melody.
TITLE
03.
The final module is the simplest and most straightforward. Users can add a song title here, either by entering it manually or by letting Suno generate one automatically.
05
STYLEFRAMES DEVELOPMENT
After creating some music, I started using AI to generate style frames, with Stable Diffusion as my main image-generation tool.

In the first part of the creation interface, users can select from the style templates provided by Stable Diffusion. I found this module particularly useful when producing works in the same series, as choosing the same style helps maintain consistency across the results.
The second module works like most AI image tools: users describe the image they want to create in as much detail as needed to achieve the best results. Users can also upload existing images and use them as a foundation for generating new ones.

I didn't use the third module much, but I found that adjusting the creativity level can lead to surprising results. The higher the creativity level, the more the image will differ from the original. For images that are similar yet distinct, a value between 0.5 and 0.75 works best.
The last module is the simplest, where the user can decide the size of the output image.
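This creativity value corresponds to the denoising strength used in image-to-image generation. As a rough sketch of the same idea outside a web interface, here is a minimal example with the open-source diffusers library; the model checkpoint, file names, and prompt are placeholders, not assets from this project:

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a Stable Diffusion checkpoint (placeholder model name).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from an existing style frame (placeholder file name).
init_image = Image.open("styleframe_v1.png").convert("RGB")

result = pipe(
    prompt="epic oil painting of a Norse longship on a stormy sea",
    image=init_image,
    strength=0.6,       # like the creativity slider: 0.5-0.75 keeps the
                        # composition while varying the details
    guidance_scale=7.5,
).images[0]

result.save("styleframe_v2.png")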





06
ANIMATIONS
1st pass animation
After my first animation pass, several adjustments were still needed. First, I needed to change the font, as the current one was not concise enough. Second, I needed to unify the colors across all the shots. Additionally, I planned to add some extra animations, which could be created in After Effects.

Revision pass animation
In the revision pass of the animation, I added transitions between shots to enhance cohesiveness and flow.

2nd pass animation
In the second animation pass, I adjusted the color palette across all the shots for better consistency and changed the font at the beginning. I also added more animation to most of the shots.

Final pass animation
For the final animation pass, I updated the font for the subtitles to make them more concise and synced the logo animation with the ending of the music for a smoother finish.

07
TECHNICAL EXECUTION

I used Runway to create the video. Compared with Stable Diffusion, Runway requires greater attention to the accuracy of the input details. In the first module, I simply input the image I wanted to include in the video, but the second module demands a precise description to achieve the desired outcome.
When providing descriptions, separate the details of the scene, subject, and camera movements into distinct sections. This structured approach ensures better results. The basic structure should be: [camera movement]: [establishing scene]. [additional details]
A clear, well-structured input description differs significantly from a confusing one and can greatly affect the quality of the results. For example:

Text input (structured): Wide-angle shot: The camera focuses on a man slowly floating in the air, with dramatic motion in the background scene.

Text input (vague): "I want this shot to zoom in."
08
FINAL RESULTS

In summary, this project was an excellent opportunity to explore new AI technology. Suno proved powerful enough that I would consider using it for future projects.