Lori's Upgrade - Facial Recognition
Hello everyone! In this blog, I will be discussing a new path I have added to my app using the MIT App Inventor Extensions - https://mit-cml.github.io/extensions/. I will also cover some of the unique challenges that came from adding too many paths to the app after going too fast in the first week, and share my feelings about this learning objective as well as any usage of ChatGPT.
Last week I added text-to-speech, audio speech recognition, and web-viewer speech recognition so my AI could give both verbal and written responses to your questions. This week I attempted to add facial recognition by adding a camera feature and an extension that recognizes multiple facial expressions: happy, sad, angry, curious, and excited. I did this by storing sample images for each expression in the memory area, so that when the user takes a picture of themselves in the app, the classifier can match it against the labels in its memory and return a confidence percentage for the closest match. These were all created in the Personal Image Classifier found at https://classifier.appinventor.mit.edu/oldpic/
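App Inventor itself works with visual blocks rather than code, but the idea behind those confidence percentages can be sketched in a few lines of Python. This is only an illustration with made-up scores, not how the Personal Image Classifier is actually implemented: a trained classifier produces a raw score per label, and a softmax turns those scores into percentages that sum to 100, with the highest percentage being the expression the app reports.

```python
import math

# Hypothetical raw class scores for one photo. The labels mirror the
# expressions trained in this post; the numbers are invented for
# illustration only.
scores = {"happy": 2.1, "sad": 0.3, "angry": -0.5,
          "curious": 0.1, "excited": 1.2}

def softmax_percentages(scores):
    """Convert raw class scores into confidence percentages summing to 100."""
    exps = {label: math.exp(s) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: 100 * e / total for label, e in exps.items()}

percentages = softmax_percentages(scores)
best = max(percentages, key=percentages.get)
print(best, round(percentages[best], 1))
```

With these invented scores, "happy" wins with roughly half the total confidence, which is the kind of single-label-plus-percentage result the app displays after you take a picture.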
I used the Create Emotion Recognition App video on YouTube from Krishna and was able to follow all the steps, as they were not too complex. However, I did have issues when testing my samples, and the accuracy percentages against those saved in memory were not high. Tip: if you decide to use this feature, when you test, keep your facial features as close as possible to the training samples so the AI can find a good-quality match. Also try to include 10 or more samples per expression; I only did 5-6 for each.
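The sample-count tip above can be illustrated with a toy sketch. This is not the Personal Image Classifier's actual algorithm; it just uses invented one-number "features" standing in for real image features to show that averaging more noisy samples gives a stored representation closer to the true expression, which is one reason more samples tend to improve match percentages.

```python
# Toy illustration: with more training samples, the stored "memory"
# for an expression averages out noise in the individual photos.
# All feature values below are invented numbers.

def centroid(samples):
    """Average of the stored samples for one expression label."""
    return sum(samples) / len(samples)

true_center = 5.0  # the "ideal" feature value for this expression

few_samples = [3.8, 6.4, 4.1]                      # only 3 noisy samples
many_samples = [3.8, 6.4, 4.1, 5.3, 4.7, 5.6,
                4.9, 5.8, 4.4, 5.2]                # 10 noisy samples

few_error = abs(centroid(few_samples) - true_center)
many_error = abs(centroid(many_samples) - true_center)
print(few_error, many_error)
```

In this made-up data the 10-sample average lands much closer to the true center than the 3-sample one, mirroring the advice to collect at least 10 consistent samples per expression.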
Keeping my current voice speech recognition capability in the app, next week I plan to add the Create a Voice Calculator App to Lori. Once that is completed, I think I will have a more well-rounded AI voice assistant.
Before deciding on which path to give my app, since I already had a couple of paths, I felt I was going to knock this out of the park and thought it would be easy. Maybe my mind was playing tricks on me: since I had already done this, I would have no issues. Boy, was I wrong. As I viewed a couple of videos and decided on facial expressions as my next path, I realized that I couldn't figure out where I was going to place it, as my app screen already looked pretty full. I experienced a mix of excitement about the new path and frustration over where it would go, asking myself whether it would conflict with, or even override, my current path. During the process I again felt some frustration: I couldn't get the camera feature into the vertical position, so all pictures are viewed horizontally, as I mentioned in the video, and the facial expressions I stored in memory didn't produce high percentage numbers when I tested the app in the Personal Image Classifier. After completing this path, I felt a small sense of accomplishment at adding a new feature, even though it's not working optimally. I still have the enthusiasm to keep going, fix my current problems, and add one final path to my mobile app.
I did not use ChatGPT for any part of this blog, the video, or the creation of this path, as most of the information was provided in the YouTube videos made by Krishna.
Creating a mobile app has been a well-rounded experience, just like learning all the new technology tools presented in this course. As a final recommendation and conclusion, I would suggest sticking with a singular path and expanding it with similar or associated paths that augment your objective, rather than pursuing multiple objectives at once.
Have a great week!
Ryan

