tanosugi's blog

Voitrain - English words pronouncing training service made with AWS Amplify
Article for AWS Amplify on 2022 September's Hackathon Submission

tanosugi
Sep 23, 2022

Introduction

Voitrain is a word-pronunciation training service. You can choose a preset card set or create your own cards, practice by listening to the pronunciation, and then record and check your own pronunciation.

image.png

Check out Voitrain - English words pronouncing training service made with AWS Amplify

1. The Problem

We know there are already many flashcard apps on the market. Some of them just show words, and some show words and pictures. But they lack one big thing: letting users pronounce the word and check their pronunciation. If users memorize words with the wrong pronunciation, it is very difficult to correct later. It is necessary to learn words and their pronunciations at the same time.

2. The Solution and Motivation

If there were an app where you can see a word, hear its pronunciation, and check your own pronunciation all in one place, it would be helpful for anyone learning words. In addition to those features, it is even more useful if users can create their own word cards. Hence, I decided to build the app myself!

I am submitting the app and code to the AWS Amplify 2022 September Hackathon. When I saw the video introducing the Figma-to-React feature of Amplify Studio, I was so amazed that I really wanted to learn how to use it and confirm whether it is useful for prototyping. That is why I built the app with AWS Amplify and joined the hackathon.

3. Demo and GitHub Repository

Demo: (voitrain.net)

GitHub Repository: (github.com/tanosugi/voitrain)

4. Features of Voitrain

Repetition is the best way to remember English words. With Voitrain, you can memorize words using preset card sets and card sets you create yourself.

4.1 Preset Card Set

Get started right away with a preset card set of fruits, body parts, and more. After listening to the pronunciation, you can say the word yourself to check whether your pronunciation is correct.

image.png

4.2 Your Own Card Set

You can create a card set for the words you want to practice. A created card set can be used in the same way as a preset one. After listening to the pronunciation of a word, you can say it yourself to see if it matches. You can change pictures by entering a URL, or just click the “update image” icon to fetch a random photo from unsplash.com.

image.png

image.png

5. Amplify Features I Used in Voitrain

I found many blogs that just try the Figma-to-React feature, and most of them only show pictures as card collections. I wanted to confirm that Figma to React is useful for prototyping and is sufficient for an MVP and an MSP (minimum salable product). After building Voitrain, I can confirm it is more than enough for both.

5.1. Figma to React (UI Library) on Amplify Studio

Firstly, I made a Figma file. Figma is GUI design software with many features for web and app design, so it is very useful for visualizing what I have in mind. When I used the Material UI library for React, it was difficult to visualize my ideas, but Figma to React is much easier. The only weakness of Figma to React is that if users lack design knowledge and make strange designs, the problems carry over directly into the app. With other UI libraries, the design gets corrected during the step of coding from the design image. Hence, we need to learn more about design when we use Figma to React.

Figma File

image.png

I also used the prototype feature of Figma. If I were used to Figma, the flow would be:

  • Figma file -> prototype on Figma -> coding with React.

But honestly, I am not used to Figma, so I worked in this order:

  • Figma file -> coding with React -> Figma prototype.

Figma Prototype

image.png

When I just show data as cards or open a modal to create or update data, I do not have to handle DataStore directly because it is wrapped by the Amplify Figma-to-React components.

image.png

The page above is rendered by the following very simple code.

import { Authenticator } from "@aws-amplify/ui-react";
import { Hub } from "aws-amplify";
import { useEffect, useState } from "react";
import Modal from "react-modal";
import useQueryCardSetFromId from "../src/hooks/useQueryCardSetFromId";
import Center from "../src/layout/center";
import Layout from "../src/layout/layout";
import { customStyles } from "../src/layout/modalStyle";
import {
  CardSetCreateView,
  CardSetViewCollection,
  Pluscircle
} from "../src/ui-components";
import TabbarMyCardsChosenView from "../src/ui-components/TabbarMyCardsChosenView";

const Home = () => {
  const [modalToOpen, setModalToOpen] = useState("");
  const { cardSet } = useQueryCardSetFromId("");
  useEffect(() => {
    // Register the Hub listener once on mount and remove it on unmount,
    // instead of adding a new listener on every render.
    const listener = (capsule) => {
      if (
        [
          "actions:datastore:create:finished",
          "actions:datastore:update:finished",
        ].includes(capsule.payload.event)
      ) {
        setModalToOpen("");
      }
    };
    Hub.listen("ui", listener);
    return () => Hub.remove("ui", listener);
  }, []);
  return (
    <Authenticator>
        <Layout>
          <>
            <Center>
              <TabbarMyCardsChosenView />
            </Center>
            <Center>
              <Pluscircle
                overrides={{
                  Pluscircle: {
                    onClick: () => {
                      setModalToOpen("CardCreateView");
                    },
                    margin: "20px 0px 20px 0px",
                  },
                }}
              />
            </Center>
            <Modal
              isOpen={modalToOpen === "CardCreateView"}
              style={customStyles}
            >
              <Center>
                {cardSet && (
                  <CardSetCreateView
                    cardSet={cardSet}
                    overrides={{
                      close: {
                        onClick: () => setModalToOpen(""),
                      },
                    }}
                  />
                )}
              </Center>
            </Modal>
            <Center>
              <CardSetViewCollection />
            </Center>
          </>
        </Layout>
    </Authenticator>
  );
};

export default Home;

5.2. Data modeling on Amplify Studio

When I tried data modeling with Amplify without Amplify Studio, the authorization rules were difficult to learn. Data modeling on Amplify Studio is much easier than code-based data modeling.

image.png

I use the "Enable owner authorization" feature for the CardSet model, which holds the card sets each user creates and edits; one user cannot access another user's card sets.

image.png

For the PresetCardSet data model, I chose "Any signed-in users authenticated with Cognito User Pool can Read PresetCardSet".

image.png
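These two authorization choices roughly correspond to `@auth` rules in the Amplify GraphQL schema. The following is a sketch of what Amplify Studio generates; Voitrain's actual field list may differ:

```graphql
# Each user owns their card sets; other users cannot read or edit them.
type CardSet @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  name: String
  image_url: String
}

# Any signed-in (Cognito) user can read the presets.
type PresetCardSet
  @model
  @auth(rules: [{ allow: private, operations: [read] }]) {
  id: ID!
  name: String
  image_url: String
}
```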

5.3. Data Content Management on Amplify Studio

When I made preset card sets of 20 fruit names and 20 body-part names, I used Data Content Management on Amplify Studio. You can create or edit each item and download a CSV file. If you want to seed items with 20 first names or other common sample data, you can generate them automatically. I think it would be even more useful if there were a feature to upload from CSV or edit items in an Excel-like table in the future.

image.png

5.4. Authentication on Amplify Studio

Introducing authentication was very easy. You can click through the Amplify Studio GUI and paste code from the tutorial. Using Google login through OAuth 2 was a little more complicated. The following five steps are necessary.

  1. Add Google Login on Amplify Studio
  2. Add credentials on GCP
  3. Copy the redirect URL from Amplify Studio to GCP
  4. Copy the Web Client ID and Web Client Secret from GCP to Amplify
  5. Set the environment variables on the AWS management console: the Web Client ID from GCP as AMPLIFY_GOOGLE_CLIENT_ID and the Web Client Secret from GCP as AMPLIFY_GOOGLE_CLIENT_SECRET. Please refer to the following screenshots.

image.png

5.5. Text to Speech, Speech to Text (Amplify Predictions)

I referred to the following article and used Amplify Predictions for Text to Speech (Amazon Polly) and Speech to Text (Amazon Transcribe). I had to modify the default IAM role for Amplify to grant access rights for Polly and Transcribe: Building a Real-Time Speech to Text React Application

image.png
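Once Transcribe returns a transcript, deciding whether the user's answer matches the card reduces to a normalized string comparison. A minimal sketch of that check (`normalize` and `isCorrect` are hypothetical helpers, not Voitrain's actual code):

```javascript
// Lowercase, strip everything except letters and spaces, and trim,
// so "Apple." and "apple" compare equal.
const normalize = (s) => s.toLowerCase().replace(/[^a-z\s]/g, "").trim();

// Compare the recognized transcript with the expected word.
const isCorrect = (transcript, expectedWord) =>
  normalize(transcript) === normalize(expectedWord);

console.log(isCorrect("Apple.", "apple")); // true
console.log(isCorrect("banana", "apple")); // false
```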

5.6. Amplify Data Store

When I modify data programmatically, I need to use DataStore directly. When the user clicks the "update image" icon, the image for the word is replaced with a random photo fetched from the Unsplash API.

image.png

import { DataStore } from "aws-amplify";
import useAxios from "axios-hooks";
import { useEffect, useState } from "react";
import { CardSet } from "../models";
import { unsplashApiKeyRandom } from "../utils/envvar";

const useQueryCardsFromCardSetId = (cardSetId: string) => {
  const [cardSet, setCardSet] = useState<CardSet>();
  const [
    { data: unsplashData, loading: unsplashLoading, error: unsplashError },
    executeUnsplash,
  ] = useAxios({
    url:
      "https://api.unsplash.com/photos/random?query=" +
      cardSet?.name +
      "&client_id=" +
      unsplashApiKeyRandom,
  });
  const fetchCards = async () => {
    if (cardSetId) {
      const respCardSet = await DataStore.query(CardSet, cardSetId);
      if (respCardSet) {
        setCardSet(respCardSet);
      }
    } else if (cardSetId == "") {
      setCardSet(new CardSet({ name: "", image_url: "" }));
    }
  };
  const updateCardSetImageUrl = async () => {
    try {
      // executeUnsplash resolves with the axios response, so use that
      // directly instead of the possibly stale unsplashData state.
      const resp = await executeUnsplash();
      await DataStore.save(
        CardSet.copyOf(cardSet || new CardSet({ name: "", image_url: "" }), (updated) => {
          updated.image_url = resp.data?.urls?.small;
        })
      );
    } catch (e) {
      alert(e);
    }
  };
  useEffect(() => {
    fetchCards();
    const subscription = DataStore.observe(CardSet).subscribe(fetchCards);
    return () => {
      subscription.unsubscribe();
    };
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [cardSetId]);
  return {
    unsplashData,
    cardSet,
    updateCardSetImageUrl,
    executeUnsplash,
  };
};

export default useQueryCardsFromCardSetId;

5.7. Amplify hosting and Amplify data model

It is amazingly easy! When I previously built a React app with a GraphQL backend, I needed S3 and CloudFront for the frontend, App Runner for the backend, and RDS for the database. I wrote Terraform for infrastructure as code and made a deploy flow with CircleCI. It took one month to build all of that.

Amplify Hosting and the Amplify data model replace most of that hard work. I understand the manual way with Terraform and CircleCI is more flexible, but I expect Amplify covers more than half of development cases with its combined frontend and backend features.

5.8. Amplify Domain management

I purchased a custom domain, registered it in Route 53, and configured it with the domain management feature of Amplify on the AWS management console.

image.png

6. Other Technology Used in Voitrain

6.1. Unsplash API combined with axios-hooks in my own hooks

I used the Unsplash API's random-photo feature to set the pictures for card sets and cards. The hook that wraps it with axios-hooks is shown in section 5.6.
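The random-photo request is just a GET to Unsplash's `/photos/random` endpoint with a search query and a client ID, as in the hook in section 5.6. A sketch of the URL construction (`unsplashRandomUrl` is an illustrative helper, not Voitrain's actual code; the client ID comes from an environment variable in the real app):

```javascript
// Build the Unsplash random-photo endpoint URL for a given search query.
const unsplashRandomUrl = (query, clientId) =>
  "https://api.unsplash.com/photos/random?query=" +
  encodeURIComponent(query) +
  "&client_id=" +
  clientId;

console.log(unsplashRandomUrl("apple", "MY_KEY"));
// → https://api.unsplash.com/photos/random?query=apple&client_id=MY_KEY
```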

6.2. Next.js

I could build and deploy a Next.js app easily on Amplify Hosting.

6.3. Audio Object of JavaScript

When you check your pronunciation in the app and your answer is correct, you hear a "ping pong!" sound. If your answer is incorrect, you hear "booboo!". I used the JavaScript Audio object for the sounds. You can have fun trying Voitrain with your kids or friends.

image.png
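A minimal sketch of how this can be wired up with the Audio object (the file paths and function names are my own illustration, not Voitrain's actual assets):

```javascript
// Pick the sound file for a result; paths are hypothetical.
const resultSoundSrc = (isCorrect) =>
  isCorrect ? "/sounds/pingpong.mp3" : "/sounds/booboo.mp3";

// In the browser, play it with the JavaScript Audio object.
const playResultSound = (isCorrect) => {
  const audio = new Audio(resultSoundSrc(isCorrect));
  audio.play();
};
```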

6.4. react-modal

I used react-modal combined with the Amplify Figma-to-React feature to show the edit window.

image.png

6.5. Google Analytics, LogRocket, and Sentry

I use Google Analytics, LogRocket, and Sentry to analyze user behavior and errors.

7. About Me

My name is Tanosugi. I am from Japan and live in Japan. I worked part-time using Visual C++ when I was a student many years ago, but now I have a non-engineering job. Two or three years ago, I read a book published in Japan, "Let's start development by yourself!", and wanted to practice what it taught, so I resumed coding as a hobby. After studying React, Django, AWS, etc. on Udemy, I have been building various web services by myself. Some of the services are for my kids.

8. Conclusion

I coded every day after the kids went to bed, a total of 40-50 hours over 2 weeks, including writing this article. At an intensive hackathon pace, it would take 3-4 days.

I was able to create the application, join the hackathon, and confirm the usefulness of Amplify, so it was a very meaningful time for me.

 