Moov.AI: Exercise. Safely. Anywhere.

Naomi Nakanishi
Apple Developer Academy PUCPR
10 min read · Oct 28, 2022


Moov.AI screens preview

Moov.AI is the app for those who seek a balance between the comfort of working out at home and the security of professional guidance.

Developed with body tracking technology combined with the knowledge of a physical education professional, Moov.AI detects incorrect movements and provides real-time feedback, preventing injuries during workouts.

The Problem

With the rush of day-to-day life, people struggle to balance work and personal life, making it difficult to find free time to take care of themselves.
Dedicated workout places such as gyms can seem overwhelming to some people, too expensive for some, and too time-demanding for others.
On the other hand, it isn't easy to work out safely at home. Even for experienced people, it is hard to tell whether movements are being performed correctly, which increases the risk of injury.

The Solution

Moov.AI gathered professionals to design workouts that deliver results without the need for equipment.
Using body tracking technologies associated with machine learning, the app learns what is right and wrong for each exercise and assists users by correcting incorrect movements with voice and visual feedback, making it possible to exercise safely anywhere.

Methodology

Like all apps developed at the Apple Developer Academy, this project followed the Challenge Based Learning framework.

First, we found the common interest that became our big idea, followed by our essential question and challenge:

CBL Engage: Big Idea, Essential Question and Challenge

Starting our Investigate stage, we came up with guiding questions and, based on our findings, brainstormed ideas for the app.

User research

To make this app feasible, we gathered information from both sides: teachers and students.

For teachers, we wanted to know how they assign a workout routine to students:

  • What data they need to assign specific exercises
  • What platforms they use as a remote environment
  • What struggles they go through to follow their students’ performance and progress

For students, we wanted to understand the purpose behind their behavior:

  • Why they chose to work out at home instead of a gym
  • The positive and negative points of working out at home
  • Their main concerns when working out
  • The results they want to achieve

In general, we looked for questions that would help us understand what is being done, what is missing and, most importantly, what we could do to solve their pains. We answered these questions through desk research combined with surveys and interviews with both teachers and students.

Getting our hands dirty

Since we had nearly six months to deliver this product, we took our time discussing each idea until everyone agreed on what should be included in our app. We split the ideas into what we wanted to deliver, what would be nice to have, features that were cool but not necessary, and what was nice but probably would not be implemented. The features categorized as "want to deliver" then went through a MoSCoW matrix, helping us set priorities for each task:

MoSCoW Matrix

We then started working on creating our personas, followed by a Value Proposition Canvas:

Personas: Two student profiles + Teacher
Value Proposition Canvas: Students and Teachers

At a certain point, we realized a whole section for teachers would not fit our scope. Therefore, we decided to focus our target audience on the people who would use the app to work out, while still counting on the help of teachers to prepare the exercises for the app.

The process continued with an empathy map and our Business Model Canvas:

Empathy Map
Business Model Canvas

Benchmarking

Looking for products that offered similar features, we analysed:

  • The way they interact with users
  • If there are any instructions on posture/position when working out
  • If there is tracking on rights and wrongs during workouts
  • How they provide feedback for users during workouts
  • Strengths and weaknesses

EXO and Alpha AI
Vay and Fitwave

We found many products that offered video classes for workouts, but most of the apps we found that offered body tracking technologies were still in beta.

Design

After extensive research, we started working on all aspects of the app's design. The first thing we did was decide on our branding and style guide.

Branding

After brainstorming names for our app, we all agreed on moov.ai.

Naming

Since we wanted to support light and dark themes, we created two variations of color palettes:

Color palettes: light and dark modes

Style guide

Our typography consists of different variations of the Poppins font for content, and the Tangerine font for our logo.

Typography

For spacing, we followed the 8-point rule:

Spacers
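
In code, a rule like this usually lives in one place. A minimal sketch of what such spacing constants might look like, assuming an 8-point scale (names and values illustrative):

```swift
import CoreGraphics

// Illustrative constants for an 8-point spacing scale; the app's
// actual names and values may differ.
enum Spacing {
    static let extraSmall: CGFloat = 8
    static let small: CGFloat = 16
    static let medium: CGFloat = 24
    static let large: CGFloat = 32
}
```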

User flow

Before designing and coding any screens, we defined our user flow:

User flow

App Interface

In parallel with researching the best alternatives for programming the app, we worked on mockups for both light and dark mode.

Screen mockups for light and dark mode

The code

Planning

While those in charge of the design worked on the items above, the developers searched for the frameworks that would best fit our project.

After this research, we spent some time deciding which framework to use. Each one had its weaknesses and strengths, so even though we had made a decision, we created well-defined layers to separate the functionality of each one. This allows switching to a different framework with the least possible friction, and also makes maintaining our app easier.

An example of these layers is the one we created to capture tracking points from users' bodies. Currently we are using Vision, but it can easily be replaced by any other framework; the changes would occur only in this layer.
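
To give an idea of how such a layer could look, here is a minimal sketch with hypothetical names (BodyTrackingProvider is not the app's actual type): anything that can produce joint positions for a frame fits behind the protocol, and Vision is just one conforming implementation.

```swift
import Vision
import CoreGraphics

// Hypothetical abstraction; the app's actual layer may differ.
// Swapping frameworks means writing another conforming type.
protocol BodyTrackingProvider {
    /// Returns normalized (0...1) joint positions detected in one frame.
    func detectJoints(in pixelBuffer: CVPixelBuffer,
                      completion: @escaping ([String: CGPoint]) -> Void)
}

final class VisionBodyTrackingProvider: BodyTrackingProvider {
    func detectJoints(in pixelBuffer: CVPixelBuffer,
                      completion: @escaping ([String: CGPoint]) -> Void) {
        let request = VNDetectHumanBodyPoseRequest { request, _ in
            guard let observation = request.results?.first as? VNHumanBodyPoseObservation,
                  let points = try? observation.recognizedPoints(.all) else {
                completion([:])
                return
            }
            // Keep only points Vision is reasonably confident about,
            // keyed by plain strings so callers never depend on Vision types.
            var joints: [String: CGPoint] = [:]
            for (name, point) in points where point.confidence > 0.3 {
                joints[name.rawValue.rawValue] = point.location
            }
            completion(joints)
        }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try? handler.perform([request])
    }
}
```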

Framework — Front end

We were uncertain whether to use SwiftUI or UIKit, but we ended up going with UIKit, mainly due to familiarity. We used ViewCode, creating a custom CodedView class, followed by a CodedViewLifeCycle class and a LayoutMaker protocol to optimize our work.
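
The general shape of the pattern looks something like the sketch below; our actual CodedView, CodedViewLifeCycle and LayoutMaker have more to them, so treat this as an outline rather than the real implementation:

```swift
import UIKit

// Each ViewCode view declares its subviews and constraints in
// well-defined lifecycle steps instead of storyboards or XIBs.
protocol LayoutMaker {
    func addSubviews()
    func makeConstraints()
}

class CodedView: UIView, LayoutMaker {
    override init(frame: CGRect) {
        super.init(frame: frame)
        // Lifecycle: build the hierarchy, then lay it out.
        addSubviews()
        makeConstraints()
    }

    @available(*, unavailable)
    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

    // Subclasses override these steps with their own content.
    func addSubviews() {}
    func makeConstraints() {}
}
```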

We also implemented a Design System, used a Swift code generator for our assets, and componentized buttons and other views to avoid code repetition.
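
As an example of that componentization, a styled button might be extracted like this (a sketch with illustrative names, colors and sizes):

```swift
import UIKit

// One reusable button keeps fonts, colors and corner radii in a
// single place instead of repeating them on every screen.
final class PrimaryButton: UIButton {
    init(title: String) {
        super.init(frame: .zero)
        setTitle(title, for: .normal)
        titleLabel?.font = UIFont(name: "Poppins-SemiBold", size: 17)
        backgroundColor = UIColor(named: "AccentColor") // from XCAssets
        layer.cornerRadius = 8
        translatesAutoresizingMaskIntoConstraints = false
    }

    @available(*, unavailable)
    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }
}
```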

Onboarding

To guide new users through the app's features and collect data used to customize their experience, we decided to have a single onboarding screen explaining everything Moov.AI has to offer. We wanted to make things more interesting with small animations, and the UIOnboarding package was the perfect fit: it creates a beautiful overview, with animations and a clear explanation, based on a specific configuration.

Custom Light/Dark mode

To further customize the UX, we decided to create an internal dark/light mode override. On a custom view controller, called ThemeViewController, users can toggle between both modes regardless of the original system configuration. The user's choice is persisted via UserDefaults.

Despite our dark/light mode trigger and override configuration, all views handle assets for both versions in the default way: the app provides colors and assets via the standard XCAssets catalogs, which makes everything change automatically when the appearance is toggled.

For more complex customizations, we also created an internal API using NotificationCenter that allows any object to subscribe to updates when the mode changes. One scenario where this was useful was in the ThemeViewController itself: whenever the user changes the mode, the state of our custom toggle and its constraints are updated using this custom trigger.

Light/Dark mode switch and Home screen in both modes
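
Putting the two previous paragraphs together, the core of such a theme manager could look like this sketch (type, key and notification names are illustrative, not the app's actual code):

```swift
import UIKit

// Persist the user's choice in UserDefaults, override the system
// appearance, and broadcast changes so any object can react.
enum ThemeManager {
    static let didChangeNotification = Notification.Name("ThemeManagerDidChange")
    private static let key = "preferredTheme"

    static var current: UIUserInterfaceStyle {
        get { UIUserInterfaceStyle(rawValue: UserDefaults.standard.integer(forKey: key)) ?? .unspecified }
        set {
            UserDefaults.standard.set(newValue.rawValue, forKey: key)
            // Overriding the windows restyles every view that pulls
            // its colors and images from XCAssets.
            UIApplication.shared.windows.forEach { $0.overrideUserInterfaceStyle = newValue }
            NotificationCenter.default.post(name: didChangeNotification, object: nil)
        }
    }
}
```

Any object interested in theme changes subscribes to didChangeNotification via NotificationCenter, which is how the ThemeViewController keeps its custom toggle and constraints in sync.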

Memojis as avatars

We wanted to allow our users to set an avatar as their profile picture, and nothing is better than Memojis!

However, since the Memoji APIs are closed, we created a library of pre-made Memojis. Each one is built from an image and a color, letting users pick whatever fits them best.
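
The model behind such a library can be as simple as this sketch (names hypothetical):

```swift
import UIKit

// Each pre-made Memoji pairs a bundled image with a background color.
struct MemojiAvatar {
    let imageName: String
    let backgroundColor: UIColor
}

// A small catalog the user picks from on the profile screen.
let avatarLibrary: [MemojiAvatar] = [
    MemojiAvatar(imageName: "memoji-01", backgroundColor: .systemTeal),
    MemojiAvatar(imageName: "memoji-02", backgroundColor: .systemPink),
]
```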

Framework and logic

As mentioned in the previous section, we are using Vision to capture points from different parts of the body. These points are used to generate a "skeleton" of the user, allowing us to analyse which movements are being executed during the workout.

We then created a generic condition system that uses the workout analysis to decide whether exercises are being performed correctly. The goal of this system is to simplify the creation and management of different exercises, considering that one condition can be observed in multiple exercises.

To illustrate, one of the conditions we've mapped is "right leg angle higher than x degrees". We used this condition for our first mapped exercise, Standing Leg Abduction, but it can also be used for any other exercise that involves "opening" or "closing" a leg.

Demonstration of the angle captured for this condition

To optimize the condition system, we created logic to analyse the conditions and turn them into metrics that can be reused in different cases. To create "right leg angle higher than x degrees", we used the metric "angle lower than x degrees", which can also be used for "left arm angle lower than x degrees".

Demonstration of left arm angle lower than x degrees
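
A sketch of how one metric plus a threshold can become a reusable condition, assuming hypothetical names and joint keys (the app's actual condition system is more elaborate):

```swift
import Foundation
import CoreGraphics

/// Angle at `vertex` formed by the segments vertex→a and vertex→b, in degrees.
func angle(at vertex: CGPoint, between a: CGPoint, and b: CGPoint) -> CGFloat {
    let v1 = CGVector(dx: a.x - vertex.x, dy: a.y - vertex.y)
    let v2 = CGVector(dx: b.x - vertex.x, dy: b.y - vertex.y)
    let dot = v1.dx * v2.dx + v1.dy * v2.dy
    let magnitudes = sqrt(v1.dx * v1.dx + v1.dy * v1.dy) * sqrt(v2.dx * v2.dx + v2.dy * v2.dy)
    guard magnitudes > 0 else { return 0 }
    return acos(max(-1, min(1, dot / magnitudes))) * 180 / .pi
}

// One metric ("angle between three joints"), many conditions: the same
// code checks a leg abduction or an arm position, only the configuration changes.
struct AngleCondition {
    enum Requirement {
        case higherThan(CGFloat) // e.g. "right leg angle higher than x degrees"
        case lowerThan(CGFloat)  // e.g. "left arm angle lower than x degrees"
    }

    let vertexJoint: String          // joint keys are hypothetical
    let endJoints: (String, String)
    let requirement: Requirement

    func isSatisfied(by joints: [String: CGPoint]) -> Bool {
        guard let vertex = joints[vertexJoint],
              let a = joints[endJoints.0],
              let b = joints[endJoints.1] else { return false }
        let measured = angle(at: vertex, between: a, and: b)
        switch requirement {
        case .higherThan(let degrees): return measured > degrees
        case .lowerThan(let degrees):  return measured < degrees
        }
    }
}

// Example: a Standing Leg Abduction check (threshold illustrative).
let legAbduction = AngleCondition(vertexJoint: "right_hip",
                                  endJoints: ("right_ankle", "left_hip"),
                                  requirement: .higherThan(30))
```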

Machine Learning

We implemented a machine learning model using Create ML to classify the poses the user could be in during calibration. This way, we are able to identify whether the user is in the best position for calibration, according to the exercise that will be executed.

Looking to reduce the risk of issues related to the user's setup, such as environment, lighting, clothing and other sorts of interference, we chose to capture the images for our model using Person Segmentation.

Samples of our model
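
At runtime, classifying a frame with a Create ML model can go through Vision. In this sketch, CalibrationPoseClassifier stands in for the generated model class (the real class and label names differ):

```swift
import Vision
import CoreML

// Hypothetical wrapper: feed a camera frame to the Core ML classifier
// and report the most likely calibration pose label.
func classifyCalibrationPose(in pixelBuffer: CVPixelBuffer,
                             completion: @escaping (String?) -> Void) {
    guard let coreMLModel = try? CalibrationPoseClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The top classification tells us whether the user is standing
        // in the expected position for the chosen exercise.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer).perform([request])
}
```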

Another factor that might influence body tracking is image distortion. Since we use the phone camera to capture the image, depending on the angle the phone is positioned at, there may be distortion at the borders of the image. This leads to misplaced body points, making our detection less accurate.

Keeping in mind that the less distortion there is, the better our body tracking will be, we needed a way to make sure users position their phones at an angle that produces the least possible distortion.

The solution was an animation that provides instant feedback on the phone's tilt. To make this possible, we used Core Motion.

Screenshots of the angle calibration
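
A sketch of the Core Motion side, with hypothetical names and thresholds: the animation is driven by how far the device's pitch is from upright.

```swift
import CoreMotion

// Reads device motion and reports whether the phone is close enough
// to the upright angle we want for calibration.
final class LevelMonitor {
    private let motionManager = CMMotionManager()

    func start(onUpdate: @escaping (_ isLevel: Bool) -> Void) {
        guard motionManager.isDeviceMotionAvailable else { return }
        motionManager.deviceMotionUpdateInterval = 1.0 / 30.0
        motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
            guard let attitude = motion?.attitude else { return }
            // Pitch is 0 when the phone lies flat and ~π/2 when upright.
            let degreesFromUpright = abs(attitude.pitch * 180 / .pi - 90)
            onUpdate(degreesFromUpright < 5) // drive the animation from this
        }
    }

    func stop() { motionManager.stopDeviceMotionUpdates() }
}
```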

Sending feedback

To manage all alerts that can be sent to users during workouts, we've created a Notification Center Manager.

The purpose of this manager is to ensure all relevant information is provided to the user through visual and audio feedback. The app provides notifications both for incorrect movements and for general workout tips (e.g., keeping a good posture or contracting the abs). The manager decides which information is the most relevant in context before it is delivered to the user.
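
A simplified sketch of the idea, with hypothetical names and only the audio half of the feedback (the real manager also coordinates the visual alerts):

```swift
import AVFoundation

// Feedback items carry a priority; the manager only interrupts speech
// for something more important than what is currently playing.
struct WorkoutFeedback {
    enum Priority: Int, Comparable {
        case tip = 0, correction = 1
        static func < (lhs: Priority, rhs: Priority) -> Bool { lhs.rawValue < rhs.rawValue }
    }
    let message: String
    let priority: Priority
}

final class FeedbackManager {
    private let synthesizer = AVSpeechSynthesizer()
    private var currentPriority: WorkoutFeedback.Priority?

    func send(_ feedback: WorkoutFeedback) {
        // Don't let a generic tip talk over a movement correction.
        if synthesizer.isSpeaking, let current = currentPriority, feedback.priority <= current {
            return
        }
        synthesizer.stopSpeaking(at: .immediate)
        currentPriority = feedback.priority
        synthesizer.speak(AVSpeechUtterance(string: feedback.message))
        // Visual feedback (banners, highlights) would be triggered here too.
    }
}
```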

Jenkins and Fastlane

To optimize development, integration and continuous delivery, we built a CI/CD system using Jenkins and Fastlane.

These tools make it possible to launch a new build to both TestFlight and the App Store with a single click. This system saves us time, since the entire process is automated and does not depend on supervision. Therefore, it can be performed by anyone, even those without knowledge of the publishing process.
