The outcome of the 3D animation portion of this project is relatively satisfactory. However, there remains significant room for improvement in the stop-motion segment. Key areas that require further development include the clean-up of individual frames, refinement of the background, and the overall quality of the sketches. These elements currently lack the level of polish I would expect in a finalized piece.
This project marked my first encounter with stop-motion animation, a medium that proved to be both challenging and rewarding. I have come to understand that stop-motion demands considerable patience, meticulous planning, and an intensive post-production process. Despite the difficulties, this experience has allowed me to develop a new skill that broadens my creative capabilities.
It is important to note that this version of the project is not yet finalized. Due to time constraints, I am only able to submit the current iteration as a preliminary result. As such, I do not feel confident incorporating the stop-motion work into my Final Major Project (FMP) in its current form, except perhaps as supportive material or a credited experiment. However, I plan to continue refining the animation, lighting, and environment over the summer, with the aim of potentially including an improved version in the final FMP presentation.
Today’s session focused on learning the basics of Resolume Arena, the fundamentals of VJing, and how to use a MIDI controller in a live visual performance setup.
Key topics included clip triggering, layer management, and effect controls in Resolume, as well as mapping MIDI hardware for more intuitive real-time control. This foundational session sets the stage for more advanced creative workflows in future projects.
We could customise the workspace to suit the way we like to work; for instance, we could map any key on the keyboard to trigger the clip we wanted.
The same mapping can be applied to a MIDI controller, the kind of hardware used professionally at events ranging from festivals to house parties, which seemed really fun to me.
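Beyond the built-in keyboard and MIDI mapping we used in class, Resolume can also receive OSC messages, which opens the door to scripting these triggers. Below is a minimal sketch of that idea, assuming Resolume's OSC input is enabled on its usual port (7000) and that the address pattern matches your version (check Preferences > OSC); the note numbers are placeholders for whatever a given controller sends.

```python
# Rough sketch (not what we did in the session, where mapping was done inside
# Resolume itself): forwarding MIDI pad presses to Resolume clip triggers over OSC.
# Requires: pip install mido python-rtmidi python-osc
import mido
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7000)  # machine running Resolume, default OSC input port

# Map MIDI notes (e.g. pads on a controller) to clip slots in layer 1.
# These note numbers are placeholders; use whatever your controller actually sends.
NOTE_TO_CLIP = {36: 1, 37: 2, 38: 3, 39: 4}

with mido.open_input() as inport:            # first available MIDI input device
    for msg in inport:
        if msg.type == "note_on" and msg.velocity > 0:
            clip = NOTE_TO_CLIP.get(msg.note)
            if clip is not None:
                # Trigger ("connect") the chosen clip in layer 1.
                client.send_message(f"/composition/layers/1/clips/{clip}/connect", 1)
```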
Experimental Project Progress
This week, progress has been limited due to a heavy workload from two other submissions, as well as ongoing tasks like the thesis proposal and the voice acting spline. However, I did manage to re-record the stop-motion segment for the first part of the animation to achieve a cleaner version and ensure better alignment with the 3D portion.
I added this shot, which features the same action as the 3D animation, to make the transition from stop-motion to 3D smoother.
This is another test of the same shot, but with a parachute appearing at the end. I will hand-draw the parachute animation later on.
For the car crash part, I also re-recorded the stop motion, as the previous version felt rough and not quite up to standard.
I also re-recorded the stop-motion footage for the moment when the balloon pops in the sky. I think that adding a sound effect here would make this part more realistic and easier to read.
For the 3D animation, I started assembling the environment and comparing which aesthetic and style suits my animation better. The one on the right seems to fit better.
This week, I completed the credit sequence of the animation and began focusing on refining the full animation by adding textures, correcting transitions between shots, and performing other polishing tasks. However, I feel that more time should be allocated to the animation process, as the current timeframe is quite limited and insufficient, which raises concerns about the overall quality of the final outcome.
During the post-editing process, I have been testing different backgrounds, and I realised there are some issues. Initially, I wanted to hand-draw everything, such as the background and characters, but that style does not really match the claymation (left image).
This was the version without the green screen and the rigging.
Adding the credits text and reducing the size of the animation frame helps make it look more polished and less buggy.
This is the initial version of the whole animation for the submission, but there is still a lot to polish, so I will keep working on it and finish it as soon as possible.
In today’s lesson, we explored the process of creating 360° videos. I’ve always found the 360° perspective fascinating, whether in games, animations, or virtual experiences. Unlike traditional formats which lock the viewer into a single, director-chosen angle, 360° videos invite us to take control of the experience.
What makes this format so engaging is its interactivity. As viewers, we’re no longer passive observers; we can choose where to look, allowing us to feel immersed and even part of the scene. This sense of presence adds a new layer of storytelling that I find far more exciting than static viewpoints.
To start with, we had to download a plugin from Off World Live.
This plugin is also useful because it includes Spout support.
Spout is a framework that enables different software programs to share video with each other. For example, when the Off World Live plugin is added to Unreal Engine, it allows us to link it with TouchDesigner. This means we can view and interact with what is happening in TouchDesigner in real time within Unreal before rendering anything. It creates a smoother, more integrated workflow between platforms.
We could then start with the 360 camera.
To create a 360° video, we first adjusted the project settings and added new Blueprints within the level. We then used the Sequencer to animate the scene, followed by rendering the output. Finally, we made minor edits using Media Encoder. With these steps completed, the 360° video was successfully produced.
Experimental Project Progress
As part of my experimental project, I have been testing various aspects of the credit sequence in stop motion animation. Key considerations and constraints I encountered include:
Camera Limitations
Zooming in or out is not possible during filming.
Choice of camera setup (e.g., top view vs. front view) must be finalized beforehand.
Stability and Movement
The stability of the armature directly affects the flexibility and smoothness of the animation.
The tripod was taped to the floor to ensure a stable shooting setup.
Frame Rate
Different frame rates were tested to determine their impact on motion quality and pacing (see the rough calculation after this list).
Editing Techniques
Explored the use of copy-and-paste techniques to streamline the editing process.
Lighting Conditions
Lighting must remain completely consistent; any changes can disrupt visual continuity.
Frame Ratio
The aspect ratio needs to be determined before shooting, as changes are difficult later.
Pre-Production Planning
Stop motion requires detailed and precise setup in advance due to the limited ability to make changes once filming begins.
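As a rough back-of-the-envelope illustration of the frame-rate point above (my own sketch, not one of the tests themselves), the playback frame rate and whether poses are shot on ones or on twos together determine how many exposures a shot needs:

```python
# Quick calculation of how many exposures a stop-motion shot needs at a given
# playback frame rate, shot "on ones" (a new pose every frame) or "on twos"
# (each pose held for two frames). Purely illustrative numbers.

def exposures_needed(duration_s: float, fps: int, on_twos: bool = False) -> int:
    frames = duration_s * fps
    return int(frames / 2) if on_twos else int(frames)

for fps in (12, 24):
    for on_twos in (False, True):
        n = exposures_needed(10, fps, on_twos)
        style = "on twos" if on_twos else "on ones"
        print(f"10 s at {fps} fps, {style}: {n} exposures")
# A 10-second shot at 24 fps on ones needs 240 separate poses; at 12 fps on twos, only 60.
```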
In this version, I accidentally switched on a light during the final part of the video, which resulted in an abrupt and noticeable change that disrupted the visual consistency.
These are some tests with another shot, which I will fix and polish soon.
Although this version maintained consistent lighting, the frame size appeared distorted. It looked different after export compared to how it appeared during editing. I later realized that the frame size should have been properly adjusted before filming the animation.
This was another test using the previous version of the rig, which was more fragile and allowed movement only in the tail.
This is the final selected shot for the animation, chosen after extensive adjustments, testing, and experimentation prior to filming. However, there is still considerable post-production work remaining. I plan to hand-draw the side characters and background, remove the green screen and rig from the footage, and add sound effects and music to complete the project.
Here is another behind-the-scenes video of my process. Stop motion animation is a very time-consuming and patience-demanding task, as adjustments cannot be made on the spot. Instead, any errors require redoing the sequence from the beginning.
Although we began learning about motion capture in the early weeks of the semester, I still find it fascinating. Becoming more familiar with the process has increased my confidence and opened up greater opportunities to apply it in the future.
This time during the motion capture session, I experimented with a range of movements, including fencing, falling, sitting in a chair, and playing the guitar, among others. I found the experience more enjoyable than before, though I still felt a bit awkward while acting. I’m looking forward to the upcoming acting workshop, as I believe it will help me better understand how to perform specific actions effectively for animation.
Experimental Project Progress
I have started doing some testing and have begun animating using stop motion.
I find stop-motion animation harder and more time-consuming than 3D animation. It might be just as time-consuming as 2D animation, because you have to create every single frame yourself instead of only doing the key poses and letting the software generate the in-betweens. Another limitation is that once you finish shooting the stop motion, you cannot easily make changes to keyframes the way you can in 2D or 3D animation.
Also, I cannot zoom in with the camera or software to get a closer view of the shot. I have to physically arrange the props or move the camera closer, which adds extra challenges during production.
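To make the in-betweening contrast above concrete, here is a minimal sketch (illustrative only, not tied to any particular animation package) of what it means for software to generate the in-betweens between two key poses:

```python
# Minimal sketch of what "the software generates the in-betweens" means:
# linearly interpolating a value between two key poses. Real 3D packages use
# richer easing curves, but the principle is the same.

def inbetween(key_a: float, key_b: float, frame: int, total_frames: int) -> float:
    """Linear interpolation between two key values at a given frame."""
    t = frame / total_frames
    return key_a + (key_b - key_a) * t

# Example (made-up values): a balloon dog rising from 0 cm to 30 cm over 12 frames.
heights = [round(inbetween(0.0, 30.0, f, 12), 1) for f in range(13)]
print(heights)  # [0.0, 2.5, 5.0, ..., 30.0] -- every frame in between is computed automatically
```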
Despite these difficulties, I find stop motion very appealing because the small imperfections make the animation feel more human and less mechanical or computer-generated. I want to embrace these errors and treat this project as an experimental opportunity to explore and play with stop motion.
Although I expect my teacher may have many critiques on the animation later, I am excited to learn from the process.
Yesterday I tested a small stop-motion animation. It was really challenging to animate every single body part, because the armature is quite fragile and tended to break after each movement, even within just the two seconds of this video.
I will work on improving the armature since it still has several limitations.
In this week’s lesson we got to choose one topic we wanted to explore, and we will swap next week.
Projection Mapping
Today, we learned how to link an application with a projector to enable projection mapping. One key feature we explored was the masking tool, which allows users to draw and define the shapes or surfaces they plan to project onto. Once the shapes are mapped, we can import our animations into the software and align them accordingly.
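MadMapper handles this alignment for you, but as a rough sketch of the underlying idea (illustrative only, using OpenCV rather than anything from the MadMapper workflow, with made-up corner coordinates), mapping a rectangular frame onto a drawn surface is essentially a perspective warp from the frame's corners to the surface's corners:

```python
# Illustrative sketch of the idea behind projection mapping: warping a
# rectangular frame onto four user-defined corner points of a surface.
# This is NOT how MadMapper is scripted; it just shows the perspective
# transform that mapping tools perform internally.
import cv2
import numpy as np

frame = cv2.imread("animation_frame.png")   # hypothetical input frame
h, w = frame.shape[:2]

# Source corners: the full rectangular frame.
src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Destination corners: where that frame should land on the projected surface
# (these coordinates are invented for the example).
dst = np.float32([[120, 80], [900, 60], [940, 620], [100, 650]])

# Compute the homography and warp the frame into place on a 1280x720 output.
matrix = cv2.getPerspectiveTransform(src, dst)
mapped = cv2.warpPerspective(frame, matrix, (1280, 720))

cv2.imwrite("mapped_frame.png", mapped)
```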
Additionally, we were introduced to a tool similar to a sequencer in MadMapper. This feature enables users to organise and control the playback of animations in a sequence, allowing for smooth transitions and dynamic presentation control.
I found this tool to be extremely useful and see strong potential for incorporating it into future work. I had considered using projection mapping before, but I had never taken the time to learn the process. To my surprise, it was more accessible than I expected. Given its creative possibilities and ease of use, I will definitely consider using it, possibly for my Final Year Project exhibition.
This session was a valuable introduction to a technique that combines technical skill with creative presentation. It has opened up new possibilities for how I can enhance my future visual projects.
Experimental Project Progress Update
I am currently in the testing phase of my project, experimenting with different armature techniques and aesthetic styles. I am also exploring whether the model might work better without an armature at all.
At first, I tried sculpting my character using only clay, without any internal support. However, the clay could not hold its shape. It would not stay together and kept falling apart.
This led me to research how to build effective armatures to give the model structure and stability.
This process of trial and error is helping me understand not only the technical needs of the materials but also how the inner structure can affect the overall look. I am looking forward to improving both the form and the style as I continue.
Although it sticks well to the body, the shape looks strange.
This is the simplest armature tutorial video I could find and try, but it does not seem to work well. I had trouble adding clay onto the armature because the clay kept breaking when I tried to push it in.
I tried another technique by poking the wire into the clay and attaching them together, but it did not work well either. When I moved the rig, it easily fell apart. I will need to fix this issue using other methods.
For the aesthetic, I want to keep everything simple. I plan to draw the background, characters, and props using white lines so that the focus stays on the balloon dog. I will do more testing to see how well this style works.
Building upon previous feedback and research, I have developed a new narrative element to integrate into my 3D animation. Specifically, I plan to include an imaginative sequence from the perspective of the balloon dog, envisioning what might happen moments before it is nearly struck by a car. This addition allows me to incorporate a distinct visual style—potentially through stop-motion or an alternative technique—to represent the dog’s internal thoughts or fears. Not only does this enhance the emotional resonance of the scene, but it also aligns with my broader exploration of using mixed media to signify shifts in perception or reality within the narrative.
In addition, I have considered incorporating stop-motion animation into the credits for both the stop-motion and 3D portions of the project. This decision was inspired by the traditional practice often seen in cinema, where distinct stylistic choices are made to enhance the overall viewing experience. I believe this could add a unique and playful touch to the final presentation.
However, I must ensure that the transition between the 3D animation and stop-motion is seamless. To achieve this, I plan to match key elements, such as items or colors, across both mediums, maintaining a cohesive aesthetic. It is essential that I confirm these design choices early in the process to avoid unnecessary revisions, which could lead to inconsistencies in the final result. Additionally, I intend to vary the camera angles—avoiding repetitive shots in the same direction—to keep the sequence visually engaging and dynamic.
Currently, I am in the process of preparing the stop-motion animation, and I aim to begin filming as soon as possible, as the project timeline is quickly approaching its end.
nDisplay is a technology used to render and project video content onto screens with non-standard or irregular geometries. A prominent example of its application is in 3D anamorphic displays.
These types of screens are increasingly prevalent in densely populated urban areas. I have often found them visually striking and have long been interested in understanding the underlying physics. Observing the displays from different angles appears to produce varying visual effects, which has further sparked my curiosity about the principles that govern their operation.
How do you set up nDisplay?
Such visual effects can be efficiently rendered in Unreal Engine using nDisplay through a relatively straightforward process. The first step involves initiating a project configured for nDisplay support.
Next, the target display surface—intended to project the video content—is imported into Unreal Engine.
Subsequently, the video content is mapped onto the imported display geometry to ensure accurate alignment and visual coherence.
That essentially completes the process.
One particularly interesting aspect is that the animated content is typically positioned behind the screen geometry. This configuration is essential for maintaining correct perspective and reflective behavior, which contributes to the illusion of depth and realism.
Additionally, I was surprised to learn that the content displayed across multiple screens must be rendered separately, rather than being treated as a single continuous display. I had initially assumed it functioned as one unified surface.
Furthermore, the 3D visual effect is only perceptually effective from specific viewing angles and positions. Outside of these optimal vantage points, the illusion can break down, resulting in distortions or visuals that appear incongruent with the intended perspective.
Progress on my experimental project
This week, I have been refining my concept in response to constructive feedback from both my primary tutor and a specialist in stop-motion animation.
Initially, my intention was to create a stop-motion animation trailer to promote a larger 3D animation project on platforms such as Instagram and other social media. Consequently, I opted for a portrait-oriented frame to align with the visual standards and user behavior typical of these digital platforms.
Upon further reflection, I came to realize that producing a stop-motion trailer for a 3D animation could potentially create confusion for the audience, as the two mediums convey different visual expectations and stylistic cues. This realization prompted a significant shift in my approach: rather than using stop-motion as a standalone promotional tool, I decided to integrate it within the 3D animation itself as a narrative device. This method enables the stop-motion sequences to serve a functional purpose within the story, enhancing the narrative by representing specific moments such as a character’s imagination, memories, or visions of the future.
This technique has been employed effectively in various films and animations to signify shifts in perspective or reality. For example, in Enchanted, the transition from traditional 2D animation to live-action cinematography underscores the protagonist’s movement from a fantastical realm into the real world, providing a clear visual distinction between the two settings.
A particularly compelling example of this technique can be observed in Black Myth: Wukong. In this game, each time the protagonist, Wukong, defeats an enemy, the narrative shifts to reveal the backstory of that character. These flashbacks are presented in a variety of visual styles, including 2D animation and stop-motion. Rather than feeling disjointed or inconsistent, these transitions enrich the storytelling by offering a distinct aesthetic for each character’s history, thereby enhancing the emotional depth and narrative complexity. Moreover, this approach showcases the artistic versatility of the studio, highlighting its ability to work across multiple animation styles. This not only sustains viewer interest but also reinforces the creative identity of the production, making the overall experience more dynamic and engaging.
This multi-style approach to storytelling is something I am seriously considering incorporating into my own project. It offers an opportunity not only to enrich the narrative but also to demonstrate a broader range of technical and creative skills.
In addition to conceptual development, I also visited the stop-motion studio this week to familiarize myself with the available equipment. This hands-on exploration gave me a clearer understanding of the practical resources at my disposal and will inform how I plan and execute the stop-motion segments of my animation.
Following my tutor’s recommendation, I explored a practical setup for capturing stop-motion animation, which involves positioning the camera overhead, placing the character centrally within the frame, and situating the background elements—or a green screen—beneath. I found this method to be both efficient and intuitive. Compared to using a tripod or rig to suspend the character, which would require time-consuming post-production work to digitally remove the support structures, this approach offers a cleaner and more streamlined workflow during both production and editing stages.
In this course, we explored the process of recording video within Unreal Engine environments using mobile devices. This functionality can also be extended to support motion capture applications.
The workflow involves several key steps: (1) enabling the necessary motion capture plugins, (2) establishing a Live Link connection, (3) configuring remote sessions, (4) utilizing the Take Recorder in multi-user mode, (5) setting up the virtual camera, and (6) installing the Unreal VCam application on a mobile device.
During the practical sessions, we encountered several technical challenges related to network connectivity between the mobile devices and the host computer. These issues suggest that there are certain limitations inherent to this method, particularly in institutional or shared network environments.
To further evaluate the feasibility and performance of this approach, I intend to conduct additional testing using my personal device and home network.
Progress on my experimental project
By this week, I have developed greater clarity and confidence in my chosen direction. I have long been interested in exploring stop-motion animation, and this project presents an ideal opportunity to engage with the medium. Stop-motion allows me to deepen my understanding of animation through a more traditional, hands-on approach. Additionally, it offers a welcome break from prolonged screen time, which helps reduce eye strain.
To support this endeavor, I reached out to the Stop Motion Department at LCC. They kindly provided me with an induction, including access to materials, facilities, and relevant resources, as well as information on lectures and workshops available for further learning. This support has proven invaluable, enabling me to begin experimenting with different materials, developing concepts, and making full use of the resources provided.
I visited the LCC Arts Shop to purchase non-drying clay suitable for stop-motion modeling.
I immediately began sculpting my character with it.
While the clay was relatively easy to shape, I encountered difficulty attaching the body parts securely, as the material’s non-drying nature made it challenging to achieve stable connections. This presented a potential issue for animation, as unstable joints could hinder smooth movement.
To address this, I incorporated an internal armature, similar to a rig in Maya, to provide structure and maintain the integrity of the model during animation. This solution allows for more controlled and consistent manipulation of the character throughout the animation process.
We have now entered a new term, with a focus on the concept of experience—both in terms of exploring it and experimenting with how it can be created and understood. In today’s session, we began by examining a series of definitions that distinguish between various roles and forms of engagement, which are crucial when designing experiences across different mediums.
The term user was particularly emphasized and was contrasted with other related terms:
Audience – someone who watches
Customer – someone who buys
User – someone who actively engages or does
Character – someone who performs
Player – someone who plays
Avatar – someone or something that replaces the self in a virtual or narrative space
This breakdown highlighted how the notion of a user is often at the center when we speak about creating interactive or immersive experiences. It was humorously noted that the only individuals who refer to customers as users are drug dealers—underscoring the importance of using terminology precisely, especially in fields like design, technology, and media.
We were also introduced to the work of artist and world-builder Ian Cheng, particularly his speculative science fiction project Life After BOB. This animated film and the subsequent installation, which was exhibited in Berlin, offer a compelling example of how narrative, technology, and audience experience can be integrated. Cheng is known for his expertise in constructing complex digital worlds and interactive systems, and his interviews were recommended as valuable resources for further insight into innovative experience design.
This lesson not only introduced foundational terminology but also encouraged us to think critically about how different kinds of users engage with experiences, systems, and narratives—an essential perspective for any creative or research-driven practice in experience design.
Ideation
After the introduction to experience design, I began generating ideas for my experimental project:
1. Extension of my FMP: develop the balloon dog further by making it interactive, allowing for greater audience engagement and memorability.
2. Experiments on my FMP: test whether plain 3D animation is sufficient or if additional elements, such as interactivity or immersion, are necessary.
3. My Areas of Interest:
Stop-motion
Interactive installation
Regardless of the direction, I must define a clear target audience and consistently design the experience around their needs and expectations.