Categories
Mobile App Development

App: Microsoft Teams Chat Evaluator

The macOS app I designed.

Note: This app was never published.

For one of my Master’s in Organizational Leadership courses, we were tasked with evaluating our group’s performance as a team.

Evaluating the chat

During the course, we discussed using different methods and metrics to evaluate how well a team is working together. We were also tasked with creating our own set of team evaluation criteria to self-evaluate our group. At the beginning of the class, we established group communication expectations. One of those expectations was responding to each other’s queries within 48 hours. Because of this, I thought it would be an interesting experiment to create a program to analyze our group’s compliance with our established 48-hour norm and have a Large Language Model (LLM) analyze our group’s collaboration. Because most of our team interaction took place in a Microsoft Teams chat, I decided to use the Teams chat log as the content that would be used to evaluate our group.

Extracting the chat and metadata

The most difficult part of this task was actually getting the Teams chat log into a ChatMessage object format that I could analyze. Microsoft Teams typically only allows an organization admin to export chat logs. To get around this, I developed a macOS app that could take in screenshots of our chat and use Optical Character Recognition (OCR) to extract the text content with high accuracy. Once the text was extracted, I looked for patterns in it to identify which parts were the message’s content, the date and time of the message, the message’s author, whether an individual was mentioned, and whether or not the author was replying to another user. The function I wrote to analyze the extracted String for this information was extremely complex, but in the end I was able to extract the metadata from each chat message and put it into a better form for further analysis.
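The real parsing function handled far more edge cases, but the core idea can be sketched like this. The field names, header layout, and timestamp format below are illustrative assumptions about the OCR output, not the app's actual code:

```swift
import Foundation

// A simplified sketch of ChatMessage and the pattern-matching parser.
// Assumes OCR output of the form "Author  MM/dd h:mm a\nmessage body".
struct ChatMessage {
    let author: String
    let date: Date
    let content: String
    let mentions: [String]
    let isReply: Bool
}

func parseMessage(from block: String) -> ChatMessage? {
    let lines = block.split(separator: "\n", maxSplits: 1).map(String.init)
    guard lines.count == 2 else { return nil }

    // Header line: author name followed by a timestamp.
    let header = lines[0]
    let formatter = DateFormatter()
    formatter.dateFormat = "MM/dd h:mm a"
    formatter.locale = Locale(identifier: "en_US_POSIX")

    // The timestamp begins at the first digit in the header.
    guard let digitIndex = header.firstIndex(where: { $0.isNumber }) else { return nil }
    let author = header[..<digitIndex].trimmingCharacters(in: .whitespaces)
    let stamp = String(header[digitIndex...]).trimmingCharacters(in: .whitespaces)
    guard let date = formatter.date(from: stamp) else { return nil }

    let body = lines[1]
    // Mentions and replies show up as recognizable text patterns.
    let mentions = body.split(separator: " ")
        .filter { $0.hasPrefix("@") }
        .map { String($0.dropFirst()) }
    let isReply = body.hasPrefix("Replied to")

    return ChatMessage(author: author, date: date, content: body,
                       mentions: mentions, isReply: isReply)
}
```

In practice the hard part is that OCR output is messy, so the real function needed many more fallbacks than this sketch shows.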

Analyzing the Chat

Now that I had the chat in a format I could work with, I wrote code to determine how many messages were sent by each group member, the average response time between a chat member being mentioned and their next message (a response), and how many times they failed to respond within our group’s established communication expectation of 48 hours. This seemed to work quite well, and after manual review, the accuracy of the app in determining whether someone failed to respond within 48 hours was quite high.
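The mention-to-response measurement can be sketched as follows. This is a minimal model, not the app's actual code; the `Message` shape and the rule "the member's next message after a mention counts as the response" are simplifying assumptions:

```swift
import Foundation

// Simplified message: just an author, a timestamp, and who was mentioned.
struct Message {
    let author: String
    let date: Date
    let mentions: [String]
}

let deadline: TimeInterval = 48 * 60 * 60  // the group's 48-hour norm

// For each mention of `member`, find their next message and measure the
// gap. Returns the average response time and the number of missed deadlines.
func responseStats(for member: String, in messages: [Message]) -> (average: TimeInterval?, missed: Int) {
    var gaps: [TimeInterval] = []
    var missed = 0
    for (i, msg) in messages.enumerated() where msg.mentions.contains(member) {
        if let response = messages[(i + 1)...].first(where: { $0.author == member }) {
            let gap = response.date.timeIntervalSince(msg.date)
            gaps.append(gap)
            if gap > deadline { missed += 1 }
        } else {
            missed += 1  // never responded at all
        }
    }
    let average = gaps.isEmpty ? nil : gaps.reduce(0, +) / Double(gaps.count)
    return (average, missed)
}
```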

When it came to analyzing the chat with an LLM, I utilized Llama 3, an open-source LLM created by Meta, running on a MacBook Pro through Ollama. At first, I noticed that the LLM had trouble keeping up with the context and learned that there was a ceiling on the number of tokens (sort of like words) the LLM could accept at once without losing track of context. To get around this, I ended up feeding the LLM each person’s chat history one at a time and was able to get summaries of each of their contributions. However, I was unable to get summaries of their interactions with others within the chat, as providing everything at once was not possible with this LLM.
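The workaround for the context-window ceiling amounts to splitting one member’s messages into chunks that stay under a token budget, then summarizing each chunk separately. The sketch below uses a crude 4-characters-per-token estimate and an arbitrary budget — both assumptions, not Llama 3’s actual tokenizer:

```swift
// Group one member's messages into chunks that fit a rough token budget.
// Each resulting chunk can then be sent in a single summarization prompt
// to the locally running model via Ollama.
func chunks(of messages: [String], tokenBudget: Int = 2048) -> [[String]] {
    var result: [[String]] = []
    var current: [String] = []
    var used = 0
    for message in messages {
        let estimate = max(1, message.count / 4)  // crude token estimate
        if used + estimate > tokenBudget, !current.isEmpty {
            result.append(current)  // budget exceeded: start a new chunk
            current = []
            used = 0
        }
        current.append(message)
        used += estimate
    }
    if !current.isEmpty { result.append(current) }
    return result
}
```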

Conclusion

This experiment was unique and is definitely worth revisiting in the future, especially as Large Language Models continue to improve.

Categories
Mobile App Development

CU Athlete App

The CU Athlete app lets Colorado NCAA athletes easily log their fueling station attendance, check their Gold Card balances, RSVP and check in to department events, and more! The app is used daily by every Colorado student-athlete on every team.

Learn about how the app was created up to version 3.0, back when it was known as the “CU Buffs Fueling Station App” (the current version is 6). The write-up is hosted on Archive.org because I accidentally deleted my previous WordPress site while learning how to self-host.

Core Features: Athlete

The app allows athletes to do the following:

-Check in for meals with one button press.
-RSVP for athletic department events with a single button press.
-Check into athletic department events using a rotating QR code resistant to spoofing (or sharing with friends).
-See their Gold Card (Flatiron Meal Plan) balance.
-Play a Ralphie Run minigame.
-See the Fueling Station meal times.
-Receive announcements.
-Set a personal CU branded countdown widget for their events.
-Do all of the above reliably, even with unstable or no internet.
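The spoof-resistant rotating QR code works roughly like a time-based one-time code: the payload is derived from a per-athlete secret plus the current time bucket, so a screenshot shared with a friend goes stale once the interval passes. The sketch below is illustrative only — it uses a toy FNV-1a hash for portability, where a real implementation would use an HMAC (e.g. CryptoKit’s `HMAC<SHA256>`):

```swift
import Foundation

// Derive a short code from (secret, time bucket). The code changes every
// `interval` seconds, so stale screenshots stop scanning successfully.
func rotatingCode(secret: String, at time: TimeInterval, interval: TimeInterval = 30) -> String {
    let bucket = Int(time / interval)      // advances every `interval` seconds
    let input = "\(secret):\(bucket)"
    var hash: UInt64 = 0xcbf29ce484222325  // FNV-1a offset basis
    for byte in input.utf8 {
        hash ^= UInt64(byte)
        hash = hash &* 0x100000001b3       // FNV prime, wrapping multiply
    }
    return String(hash, radix: 16)
}
```

On scan, the server recomputes the code for the current (and perhaps adjacent) time bucket and accepts the check-in only if it matches.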

Core Features: Staff

The CU Athlete Admin iPad app allows staff to do the following:

-Create accounts for athletes (including by importing spreadsheets).
-See Fueling Station check-ins in real time and export data to spreadsheets.
-Scan athletes’ QR codes to check them into events.
-Schedule athletic department events and see RSVPs and check-ins (and export those to spreadsheets).
-Set the meal times for the fueling station.

Categories
Mobile App Development

Christmas Trivia TV App

The Christmas Trivia TV app was created by me in 2019 and has been updated annually since. The app is an automatic trivia game that cycles through Christmas-related trivia questions with beautiful backgrounds to go along with them. The app is designed for iPad, iPhone, Apple TV, and Mac. It is best experienced on a large TV and is perfect to throw on for Christmas parties or family gatherings.

The app utilizes a Node.js REST server I self-host that stores the trivia questions (students in ATLAS Web are able to query this database server for their JavaScript unit).

The app also has an in-app purchase called Fireplace Mode that puts a virtual fireplace (a looped recording of my own fireplace) in the background.

Click here to download the app

Categories
Other Projects

Tor Browser Animated Explainer Video

I created an educational animated video about the Tor Browser. The Tor Browser is the ultimate privacy tool for browsing the internet anonymously. It is also designed to get around internet censorship of all kinds, allowing anyone to exercise their freedom to access information.

I animated the video using Adobe After Effects, and edited in audio and some other enhancements using Final Cut Pro. I also created over half of the assets using Affinity Photo and Affinity Designer (Photoshop and Illustrator alternatives). 

To learn more about the Tor Browser please visit The Tor Project’s website at https://www.torproject.org

Categories
Mobile App Development

App: ML Image Sorter

Made mostly obsolete by the release of Apple Intelligence, but helpful beforehand.

ML Image Sorter was an app I designed in 2024 at the request of one of my former high school art teachers. He needed a way to sort through thousands of images in Apple Photos and place them into specific albums using very particular criteria, such as the style of the image.

To assist him, I thought that letting him train his own machine learning models on the specific sorting criteria he desired could do the job.

Writing the app

Because he works in Apple Photos and already has a very detailed photo organization strategy, I decided to make a macOS app that could take a custom Image Classification model made in CreateML, find images that match the ML model’s criteria, and copy those photos into new Photos app albums named after the model’s classification. I also made the app compatible with sorting photos into folders if he were to decide to use it outside of the Apple Photos app.
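The folder-sorting mode can be sketched as below. This is a minimal model, not the app’s actual code: the real classifier is the user’s CreateML model run through Core ML, which I stand in for here with a plain closure, and the function names are hypothetical:

```swift
import Foundation

// Copy each image into a destination folder named after its classification
// label. `classify` stands in for the user's CreateML image classifier.
// Returns a count of images sorted per label.
func sortImages(at urls: [URL], into destination: URL,
                classify: (URL) -> String) throws -> [String: Int] {
    var counts: [String: Int] = [:]
    let fm = FileManager.default
    for url in urls {
        let label = classify(url)
        let folder = destination.appendingPathComponent(label, isDirectory: true)
        try fm.createDirectory(at: folder, withIntermediateDirectories: true)
        let target = folder.appendingPathComponent(url.lastPathComponent)
        if fm.fileExists(atPath: target.path) { continue }  // skip duplicates
        try fm.copyItem(at: url, to: target)
        counts[label, default: 0] += 1
    }
    return counts
}
```

The Photos-app mode works on the same principle, except the destination is a Photos album named after the classification instead of a folder on disk.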

Results

The app itself worked flawlessly. However, it was limited by the performance of the Image Classification model that the user would manually create in Apple’s CreateML application (included with Xcode). I included instructions for my former teacher on how to create an accurate model by providing good training specimens. However, since he could only include so many examples, some classifications were difficult (especially for models that attempted to classify images by very obscure differences). Since my primary audience was only one person, I could have done a better job of making the app more visually appealing. That said, I got it into a good enough state to be published on the Mac App Store.

Since publishing this, the Apple Photos app received some pretty hefty upgrades in the form of natural language search, which lets a user make much more specific queries. This largely makes my software obsolete. However, it is still available on the Mac App Store at the moment.

App Store Link: https://apps.apple.com/us/app/ml-image-sorter/id6503262847?mt=12

Categories
Other Projects

Privacy for the Paranoid: An Educational Workshop

A poster I made for the workshop

In 2024, after reading Edward Snowden’s book Permanent Record and using Tor extensively for Project GreenWisp (thanks to Tor’s ability to let me configure local devices behind firewalls), I found myself doing tons of research on censorship, surveillance, and the privacy tools used to circumvent both. I learned so much that I felt passionate enough to create a workshop around it and present it to students in the Blow Things Up Lab (BTU) at CU Boulder.

Presentation Link:

Categories
Other Projects

Mini Project: The CUSPYS Voting System

A simple way to get accurate votes for CU Athletics’ award show

In 2024, as president of the Student-Athlete Advisory Committee (SAAC), I was in charge of organizing many aspects of the CU Sports Person of the Year Awards (CUSPYS). One of those aspects was voting for Moment of the Year. Moment of the Year is unique in that, unlike the other awards, the winner is decided by popular vote at the CUSPYS event itself rather than ahead of time. I took up creating the voting system for Moment of the Year using HTML/CSS and JavaScript with a Node.js backend.

Previous Voter Fraud

Previously, there were issues where athletes would vote multiple times. At other times, the voting system required a login, which made it difficult for athletes to vote.

My solution: Security through feigning obliviousness

While designing the system, I needed a way to ensure that athletes could not vote twice while keeping it easy to vote. I did not want to require a login, as logging in would have taken up too much of athletes’ time and they might not know their credentials. Instead, I tracked voting records using browser cookies on the user’s device. When a user voted, a cookie was saved on their device marking that they had already voted, so the server would know if a user submitting the form had voted before.

However, the cookie technique has a problem: a user can simply open a private browser tab or clear their cookies to vote again. So I came up with an idea: what if I gave no indication that I was keeping track of who voted? To do this, I intentionally made the submission success page give no indication as to whether a person was voting for the first time or the 15th time. From the perspective of someone wishing to vote multiple times, it would look like doing so was as simple as resubmitting the form over and over again, even though only their first vote was recorded on the server. In a sense, I was feigning unawareness of a potential security design flaw to encourage attackers to attack in a way that I had in fact secured, rather than via other methods.
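The server-side half of this design fits in a few lines. The actual system was Node.js; the sketch below restates the idea in Swift with illustrative names — record only the first vote per cookie token, but return an identical success response either way, so repeat submitters get no hint their votes were discarded:

```swift
// "Feigned obliviousness": duplicates are silently dropped, and the
// response never reveals whether a vote was counted.
final class VoteBox {
    private var tallies: [String: Int] = [:]   // choice -> votes
    private var seenTokens: Set<String> = []   // cookie tokens that voted

    // Always returns the same success message, recorded or not.
    func submit(choice: String, token: String) -> String {
        if seenTokens.insert(token).inserted {
            tallies[choice, default: 0] += 1   // first vote: count it
        }
        return "Thanks for voting!"
    }

    func count(for choice: String) -> Int { tallies[choice] ?? 0 }
}
```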

Wrap up

In the end, the method I picked to secure the voting system seemed to work flawlessly. The winner of Moment of the Year: the Ski Team winning the National Championships!

Categories
Uncategorized

TuneScore: Music Flashcards

A music app I made with a friend in less than 6 hours.

TuneScore is an educational app that displays notes for the user to play on an instrument and tells them whether what they played was correct. The app utilizes the Tuna library to determine what note is being played based on microphone input.
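Tuna reports a detected pitch; checking it against the displayed flashcard comes down to standard math, since MIDI note number = 69 + 12·log₂(f / 440 Hz). This sketch shows that mapping (the app’s actual checking code differs):

```swift
import Foundation

let noteNames = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

// Convert a detected frequency in Hz to a note name like "A4".
func noteName(forFrequency hz: Double) -> String {
    let midi = Int((69 + 12 * log2(hz / 440)).rounded())
    let octave = midi / 12 - 1          // MIDI 60 is C4 ("middle C")
    return "\(noteNames[midi % 12])\(octave)"
}
```

Rounding to the nearest MIDI note also gives the app some tolerance for instruments that are slightly out of tune.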

Working with a friend

This project was a class project for my friend Joelle McDonald. She enlisted my help due to my previous experience with iOS development. She laid out the requirements for the app as well as the designs for the iconography and other assets, while I wrote the logic in Swift, using SwiftUI as the layout engine.

TuneScore supports a variety of different instruments and allows users to adjust whether they want to see questions with flat or sharp notes. They can also customize whether to see questions with the treble or bass clef and the max number of ledger lines a question can display.

Potential Future Features

If I were to add features going forward to this app I would consider adding the following:

  • The ability for the device to play the correct note.
  • A higher or lower feedback system to help the user.
  • Guidance on how to play a particular note on a specific instrument.

You can download the app for iOS, iPadOS and macOS here: https://apps.apple.com/us/app/tunescore-music-flashcards/id6738848208

Categories
Mobile App Development

Project GreenWisp: Digital Waste Assistant Documentation

Our video

Project summary

Project GreenWisp is an easy-to-use, affordable digital waste assistant iPad app designed for high-traffic waste stations. GreenWisp aims to reduce recycling and compost contamination through the use of colorful animated waste characters and modern object detection technologies.

Background

Boulder County’s recycling guide

In late 2022, I learned that CU was eliminating non-food compost from its waste systems. The reason was a change in policy from A1 Organics, a major compost processor for the Boulder area, due to high contamination rates in compost waste. Having learned this, and knowing that I myself struggle to know what goes in the compost, trash, and recycling bins, I thought it would be an interesting experiment to use machine learning technologies to create a system that could tell a user where items go.

Spring 2023 Machine Learning Independent study

Project GreenWisp Spring 2023 demo

View full writeup here

During Spring of 2023, I pursued an independent study to learn about machine learning technologies for this task, with a focus on object detection. Over the course of the semester, I attached a camera and a computer to a high-traffic waste station in the athletic department dining hall. Using motion detection, the camera took photos of waste items as people were disposing of them. I collected upwards of 800,000 photos and practiced using tools such as TensorFlow. From these photos, I took a few hundred and labeled the objects in them as either trash, compost, or recycling. My goal was that by the end of the semester, I would attach a display to the waste station showing classifying bounding boxes on items as they were being thrown out. As the end of the semester drew near, I was finding it very difficult to get TensorFlow to work well enough to reach my goal, so I opted to use Apple’s CreateML to classify three item categories: trash, food, and reusable plates. CreateML enabled me to quickly deploy a proof-of-concept Mac app for a few weeks at the end of the semester. Despite some bugs with coordinate drawing on the small monitor I used, reception was good, although I noticed quite a few issues.

Issues from the proof of concept

The ML model identifying a napkin as trash (the coordinate system had some issues which is why it is not aligned properly)

One of the main problems with my approach was that training a model to identify waste categories would only allow me to use that model at one particular location. Items that are recyclable or compostable in one location may actually contaminate those same bins in other locations and should be thrown in the landfill instead. Another issue was that, given my limited labeling resources, labeling thousands of items to produce a highly accurate machine learning model was not feasible, since waste items vary widely in size, shape, and color. If I couldn’t make an accurate model for a system that relies entirely on ML, I wouldn’t be able to help reduce contamination.

Fast forward to Fall of 2023

One of our precedents: The Oscar Sort AI waste bin

Having identified some flaws with my previous approach, I brainstormed a new approach to address them. During the precedent research phase of the capstone process, I found that earlier in 2023 a product called Oscar Sort had launched. This product had a similar setup to what I had previously envisioned, using machine learning to identify waste objects. Another precedent I found was the EvoBin. This system did not use artificial intelligence at all; rather, it was a screen designed to be placed above waste stations that cycled through a prerecorded video with images of items specific to that location and which bin they belong in. The EvoBin would also occasionally show trivia or fun facts to keep users interested. When looking at these precedents, what stuck out to me the most was how all of them required additional hardware installed by professionals and were very expensive, with some costing tens of thousands of dollars. All of the precedents also seemed to use the same realistic, non-animated images for displaying waste items.

Brainstorming

An example of Japanese Kawaii style characters (these two were used for the 2020 Olympic Games)

After completing the precedent research phase, I began brainstorming how I could achieve something similar to the precedents I identified, but in a novel fashion that addresses their shortcomings. The most significant shortcoming, in my opinion, was the high cost of the precedents’ hardware. Having previously used Apple’s object detection technologies for my first attempt at this project, I thought it would be interesting to create an application that could run entirely on an iPad. By running the app on an iPad, I could achieve similar results for a significantly lower price and with no additional hardware required.

Despite the easy-to-use Apple CreateML suite, I knew my limited ability to label tons of data on my own would make highly accurate object detection models for waste difficult. So I thought it would be interesting to take EvoBin’s approach of engaging digital signage as the first priority for the project, but spice it up with less “industrial”-style images than all of the precedents used. I have always been interested in the cute style of some Japanese animated characters such as Pokémon. This style is commonly referred to as Kawaii style, and I always felt like characters drawn in this style would get my attention. I thought perhaps I could incorporate that kind of character style into this project.

As for the AI features, I decided that object detection could still be present in the project, but in a secondary role: select items that could be successfully identified would get an additional animation telling the user what item they are currently holding.

My logo

What’s in the name?

The company name I use for all of my technology services is CyberWisp, with the word wisp being a synonym for a ghost. My branding assets include some cute and colorful ghost-like characters. Given that this project also has cute characters, I could continue the naming pattern with the name GreenWisp. After some additional brainstorming, I came up with a general structure for the project and settled on 4 key goals.

4 Key goals

Design an iPad app that:

  1. Utilizes cute kawaii-style animated characters to get the attention of passersby throwing away waste and help them accurately put their waste in the correct bin.
  2. Incorporates machine learning object detection to identify items in real time when feasible.
  3. Allows easy customization by waste management administrators for their specific waste station configuration.
  4. Requires no additional hardware outside of an iPad.

Getting into groups

Because I have around 5 years of iOS mobile app experience, I felt pretty confident tackling the technology challenges on my own. However, my artistic ability to create and animate original cute-looking characters that don’t look like they want to eat your soul was lacking. That’s why I sought out Ian Rowland and Nik Madhu to be in charge of creating the character animations. Both Nik and Ian had experience with Adobe After Effects and were equally passionate about using their skills on a project designed to fight climate change.

Diagrams

A VERY early diagram of the waste screen.

Before building an app, I usually mock up what it will look like so that I can start to figure out how the pieces will fit together from both a user interface perspective and a coding perspective. We based our sketches on ideas from precedents such as EvoBin and Oscar Sort. We opted for a column interface, where each column represented a particular waste category such as trash, compost, or recycling. Since waste stations can vary in the number of categorical bins and the position of those bins, we needed to allow for easy customization. We quickly found that the screen real estate of a 10.9-inch iPad is limited for all the information we wanted to display to be easily read from arm’s length, so every design decision had to be carefully made and some features had to be cut. One of the original ideas was incorporating EvoBin’s trivia and fast facts into the app, but we quickly found this would be very difficult because of the limited screen real estate. We decided on displaying three items at once per category and allowing up to four categories of waste to be displayed at one time. We also included arrows in our diagrams that could be customized to point in the direction of a particular bin.

Acquiring Materials

Apple’s specs for the iPad 10th generation.

Apple has numerous iPads available, each with different screen sizes and capabilities. Since one of my project’s focuses is affordability, I thought it would be wise to design my app targeting one of the cheaper iPads rather than the high-end iPad Pros. By doing this, I could ensure that the app works on as many devices as possible. After doing some research, I found that the 2nd most affordable iPad has an ultra-wide-angle front-facing camera that could be very useful for classifying items in front of a wide waste station. I ended up purchasing a 10.9-inch iPad as well as a high-speed SSD for moving large amounts of images when training machine learning models.

Firing up Xcode.

Since I was creating an iPad app (with the additional ability to run on macOS), all of the code was going to be written in Xcode using the Swift programming language. I also decided to use SwiftUI to create the views, which took a bit of getting used to since most of my expertise has been with UIKit. As a mobile app developer, one of my first tasks when starting a new app is establishing how the app’s data will be structured and accessed so I don’t have to rewrite and rethink large amounts of logic later.

Structuring the data

When structuring the data, I concluded that the waste columns were by far the most important data structure in the app. Each column would represent a particular waste category such as trash, compost, or recycling. Waste admins can name these columns from the configuration menu as well as set the background color. Most importantly, items can be assigned to each column. Since I was going to be working with animations, each column object would also manage a SpriteKit view (SKView) to control animations.

For saving the data, I figured I would use a single JSON file that would be loaded whenever the app starts.
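The column-centric model maps naturally onto Codable structs that serialize to that single JSON file. The field names below are illustrative, not the app’s exact schema:

```swift
import Foundation

// Sketch of the configuration saved as one JSON file at launch/save.
struct WasteItem: Codable {
    var name: String
    var imageName: String
}

struct WasteColumn: Codable {
    var title: String          // e.g. "Compost"
    var colorHex: String       // admin-chosen background color
    var items: [WasteItem]     // items assigned to this column
}

struct Configuration: Codable {
    var columns: [WasteColumn]
}

// Encode/decode round-trip, as the app does on save and at launch.
func save(_ config: Configuration) throws -> Data {
    let encoder = JSONEncoder()
    encoder.outputFormatting = .prettyPrinted
    return try encoder.encode(config)
}

func load(_ data: Data) throws -> Configuration {
    try JSONDecoder().decode(Configuration.self, from: data)
}
```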

Starting the UI: Configuration Screen

Before I could work on the UI that would display items, I had to build a way to create waste categories and assign items to those categories. I decided to do this by designing the settings screen first. The settings screen shows all of the waste items available to categorize and lets you create a waste column and customize its color, its name, and which items belong to that category. I used quite a few SwiftUI lists to allow easy selection of data. The configuration screen is accessed with a simple tap from the main screen.

Displaying items

Now that the configuration screen was complete, my next task was displaying items on the screen. Because we were limited in screen size, I thought it would make sense to show only 3 items at once per category. Items would be displayed with their animation (or a static image at this point in time) as well as a label below of what the item is. Categories with more than 3 items would rotate items out every 5 seconds by fading them. I achieved this by creating “waste groups”: essentially, a group would reference up to 3 items in a category and would be able to rotate through them. This would also let me force a particular waste group to appear at any time, which would be needed later for object detection.
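The waste-group logic can be sketched as follows. This is a minimal model under assumed names: in the real app a timer drives the rotation every 5 seconds and items carry animations, where here they are plain strings:

```swift
// Split a category's items into groups of up to three, advance the
// visible group on a timer tick, and allow forcing a specific item's
// group on screen (used later by object detection).
struct WasteGroupRotation {
    let groups: [[String]]
    private(set) var index = 0

    init(items: [String], groupSize: Int = 3) {
        groups = stride(from: 0, to: items.count, by: groupSize).map {
            Array(items[$0..<min($0 + groupSize, items.count)])
        }
    }

    var visible: [String] { groups.isEmpty ? [] : groups[index] }

    mutating func advance() {            // called every 5 s by a timer
        guard !groups.isEmpty else { return }
        index = (index + 1) % groups.count
    }

    // Force the group containing `item` to be shown immediately.
    mutating func show(item: String) {
        if let i = groups.firstIndex(where: { $0.contains(item) }) { index = i }
    }
}
```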

Dropping the arrows and the animated backgrounds

The old arrows and backgrounds.

While my primary job in this project was working on more of the technical pieces and UI, I did have to make some aesthetic design decisions. Originally I thought it would be useful to have arrows present in the app that could be set to a specific direction referencing the position of the actual waste station. Over time, I found that the arrows were taking up too much screen space so I decided to remove them. In place of the arrows, I added in waste bin images that would take a fraction of the space at the bottom of the screen (and looked significantly better). Additionally, I had placed an animated SpriteKit background effect for every column that was a bit distracting. I ended up replacing those animations with a wooden texture and coloring that according to the user’s specified column color.

Adding in animations

A screen recording of the animations added.

Now that Ian and Nik had some animations complete, I needed to add them into the app. Because each category has a background color, I couldn’t simply loop a video of each character’s animation, as the video formats we were exporting didn’t support transparency. Instead, I exported a sequence of PNG images from each character’s After Effects file and added them into Xcode to be loaded as a sprite’s texture atlas.

Object Detection

A photo taken and labeled using RectLabel Pro of a Coffee Cup to train an object detection model.

One of the main goals of this project was to add waste item recognition capabilities using object detection. At this point, the app was able to successfully cycle through animations of items but had no object detection features yet. Since we were planning on doing our user testing in the CU Boulder Athletics dining hall, I thought I would target two common contaminant items from the Fueling Station: black to-go containers and Ozo coffee cups. To get the machine learning model working, I took around 150 photos of these two items in different positions. Because the iPad was going to be mounted on the Fueling Station waste station, I put the iPad up on the waste bin to take the photos. I used an automatic photo-taker app to take the photos in quick succession with a 3-second delay between each photo so I could rotate the item into a unique position. I then labeled them with RectLabel by drawing bounding boxes on the images.

The CreateML training program.
The model classifying a never before seen image.

The process of labeling images is very tedious and took a couple of hours, as I wanted to ensure that the bounding boxes were tight around the items for the greatest accuracy. After labeling, I imported the images into CreateML and my computer whirred up its fans to train the object detection model. The resulting model seemed to have good accuracy according to CreateML.

Incorporating the ML model into the app

Video of the object detection animation.

Now that I had a trained model file, I needed to get the model to work in the app. On iOS platforms, using camera features generally requires AVFoundation, one of the most confusing frameworks I have worked with on iOS. Luckily, Apple had an example app called Breakfast Finder that could identify certain foods via object detection. I was able to repurpose some of Breakfast Finder’s code and figure out how it works. After a bit of trial and error, I successfully got the debug console to print the detection inferences. Interestingly, unless you raise the confidence threshold a bit, the console will repeatedly spam random detections. This single variable was something I would need to play around with.

Getting the console to print detections was nice, but I needed something visible to happen to the characters when the item a character represents was detected. I decided an easy animation would be to have the character grow and shrink in size repeatedly when its item was detected. To achieve this, I had the app look for items via the ultra-wide camera; if one was detected, the app would determine which waste column it belonged in, force that column to show the item on screen, and trigger the animation for a minimum of 0.5 seconds, with the animation looping as long as the item remained detected.
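The detection handling reduces to: filter raw inferences by the tunable confidence threshold (to suppress the random spam seen in the console), take the strongest remaining hit, and map it to the column that owns that item. The sketch below uses a plain `Detection` struct standing in for Vision’s observation type, and the item-to-column mapping is an illustrative example:

```swift
// Raw inference stand-in (Vision's observations carry the same data).
struct Detection {
    let label: String
    let confidence: Double
}

// Example mapping from detected items to the columns that own them.
let itemToColumn = ["coffee cup": "Trash", "togo container": "Compost"]

// Returns the column to highlight, or nil if nothing clears the threshold.
func columnToHighlight(from detections: [Detection], threshold: Double = 0.8) -> String? {
    detections
        .filter { $0.confidence >= threshold }    // drop low-confidence spam
        .max { $0.confidence < $1.confidence }    // keep the strongest hit
        .flatMap { itemToColumn[$0.label] }       // which column owns it?
}
```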

Waste Messages

Two persistent waste messages displayed. FOOD ONLY and Everything Else!

While my goal was to include a large swath of items to display, I realized that I might not be able to animate every item. Additionally, some users might be in such a big rush that they don’t have time to look at the items that belong in each bin. To attempt to solve these two issues, I added customizable labels to the app. These labels are created by a waste admin and have two modes: persistent or rotating. In persistent mode, a label is always displayed on the screen in place of one of the animated items. Waste admins can set the position of a persistent label to the top, middle, or bottom of the screen. Persistent labels are useful for messages such as “TRASH ONLY” or “NO FOOD”. In rotating mode, labels act like items, just without an image, rotating as if they were another item.

Deploying a demo into the athletic department

The demo setup in the athletic department.

During our capstone class, we had to do multiple forms of user testing. For one of our user tests, we decided to observe athletes in the athletic department interacting with the GreenWisp system and gather feedback from observations or by interviewing athletes directly. Over spring break, I set up the iPad on a waste station in the athletic department using some industrial Velcro and an extension cord running from the nearest outlet to the iPad.

After setting up the system, we briefly interviewed athletes in the Fueling Station about their experiences. We interviewed some who were at their tables, some just before throwing items away, and others just after. Here is what we found:

  • Some individuals used another waste bin and did not interact with the system.
  • Most individuals thought that the characters were cute and not uncanny. 
  • Most individuals said that they have looked at the system for help in sorting their waste at least once in the past week. 
  • A few individuals said they successfully had their items identified by the system. 
  • Overall, feedback on the system was positive.

Other things we took away from this experience:

  • Users liked the character designs.
  • We could improve our ability to get users’ attention, potentially by detecting when they walk up.
  • The object detection works but is somewhat limited, since it currently supports only four items.
  • We could improve the animation when an item is detected so that it better grabs the user’s attention.
  • Users spend only a small amount of time at the waste bin, even when interacting with the system.
  • Some users don’t know that the iPad can detect items; we should communicate that better.

Importing and exporting configuration

The configuration JSON.

With the focus on adding additional customization options to the app, it became obvious that someone configuring the system might want an easy way to copy their preferences over to another iPad running GreenWisp. Because I had previously set everything up to be saved as one giant JSON file, it was super easy to add the ability to import and export the configuration and send it to another device. I initially considered doing this via a QR code, but unfortunately the JSON files were way too large. In the future, I could potentially still use a QR code by having the device upload the configuration to a server and encoding a generated download link as a QR code for other devices to fetch that user’s configuration.
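Because the settings already live in one JSON file, export and import reduce to a `Codable` round trip. Here is a minimal sketch; `AppConfiguration` and its fields are assumed names standing in for the app’s real (much larger) configuration type:

```swift
import Foundation

// Stand-in for the app's configuration; the real type holds far more
// settings (labels, rotation timing, detection options, etc.).
struct AppConfiguration: Codable {
    var binNames: [String]
    var rotationSeconds: Double
    var labels: [String]
}

// Serialize the configuration to pretty-printed JSON for export.
func exportConfiguration(_ config: AppConfiguration) throws -> Data {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
    return try encoder.encode(config)
}

// Rebuild a configuration from JSON received from another device.
func importConfiguration(from data: Data) throws -> AppConfiguration {
    try JSONDecoder().decode(AppConfiguration.self, from: data)
}
```

On the receiving iPad, the imported struct can simply replace the current configuration, which is why saving everything as one JSON blob made this feature nearly free to add.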

Fixing the RAM issue

Xcode’s Debug Inspector RAM page.

As Ian and Nik continued to push out expressive animations of different waste characters for me to add to the app, I eventually ran into an issue where the app started to crash within a few seconds of launch. I quickly determined that this was a memory problem: the iPad we were using has a RAM limit of around 4 GB, and we were quickly exceeding that threshold because of the many high-quality animations present in the app. I tried a few fixes, one of which involved offloading all of the SpriteKit animation frames whenever an item was rotated out of display. This fixed the RAM issue but made animations extremely choppy, with lots of dropped frames. I realized that I needed to shrink the file size of each item’s PNG animation frames, so I went back into Adobe After Effects and re-exported every single item with a lower-quality PNG sequence. This process took some time, but eventually I was able to significantly reduce the amount of RAM used without dropping frames.
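The offloading approach I tried first can be sketched roughly like this: keep only the frame *names* for off-screen items, build `SKTexture` frames on demand when an item rotates in, and drop them when it rotates out. This is an illustrative reconstruction with assumed names (`ItemAnimation` is not the app’s actual class), not the shipped code:

```swift
import SpriteKit

// Sketch of the memory strategy described above. Textures are created
// lazily and released when an item leaves the screen, trading load-time
// stutter for a smaller resident RAM footprint.
final class ItemAnimation {
    let frameNames: [String]          // e.g. ["banana_0001", "banana_0002", ...]
    private var frames: [SKTexture] = []

    init(frameNames: [String]) { self.frameNames = frameNames }

    // Build textures only when the item is about to be displayed.
    func loadFrames() -> [SKTexture] {
        if frames.isEmpty {
            frames = frameNames.map { SKTexture(imageNamed: $0) }
        }
        return frames
    }

    // Release textures when the item rotates out, freeing RAM.
    func unloadFrames() {
        frames.removeAll()
    }
}
```

In practice, rebuilding textures every rotation is exactly what caused the choppiness, which is why shrinking the PNGs themselves turned out to be the better fix.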

Password locking the configuration page

After deploying the demo into the athletic department, I would occasionally come to a meal to find the app stuck on the settings page because someone had tried to interact with the iPad via touch. I needed a way to prevent anyone other than waste admins from accidentally opening this page and changing settings, so I added a password lock feature. Waste admins can optionally lock the configuration screen with a password that they set. When a password is set, tapping the screen while it is showing waste items displays a password prompt; if no action is taken within 15 seconds, the prompt disappears.
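The prompt-with-timeout behavior can be sketched as a small SwiftUI view. This is a hedged approximation with assumed names (`PasswordPromptView` and its properties are my illustration, not the app’s actual implementation):

```swift
import SwiftUI

// Illustrative sketch of the password gate on the configuration screen.
// The view auto-dismisses itself after 15 seconds of inaction.
struct PasswordPromptView: View {
    @Binding var isPresented: Bool
    @State var entered = ""
    let correctPassword: String
    let onUnlock: () -> Void    // called when the right password is entered

    var body: some View {
        VStack(spacing: 12) {
            SecureField("Admin password", text: $entered)
                .textFieldStyle(.roundedBorder)
            Button("Unlock") {
                if entered == correctPassword { onUnlock() }
                isPresented = false
            }
        }
        .padding()
        .onAppear {
            // Dismiss the prompt if no action is taken within 15 seconds,
            // so a stray tap never leaves the kiosk stuck on a dialog.
            DispatchQueue.main.asyncAfter(deadline: .now() + 15) {
                isPresented = false
            }
        }
    }
}
```

The timeout is the key kiosk detail: even if a passerby triggers the prompt and walks away, the display returns to the waste items on its own.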

Final tweaks

Towards the end of the semester, I worked on adding additional items for waste classification. I was able to train a model that could now successfully identify chip bags and cartons in addition to the previous to-go container and coffee cup. I also continued to add new character animations as Nik and Ian completed them.

What’s Next

As the semester concludes, I plan to continue this project in the future. I aim to release this app on the App Store within the next few months. Before releasing, I still need to:

  • Fix some occasional crashes.
  • Add the ability to switch cameras.
  • Add a camera preview feature for seeing what the camera sees for waste identification.
  • Allow enabling and disabling object detection for specific items.
  • Add an additional animation or state change when the device detects an approaching person.
  • Add some sort of tip or interesting-fact feature.
  • Expand the set of detectable items.
  • Create marketing materials.

Categories
Uncategorized

Final Project

For our final project, we were given the choice of any material and tasked with creating something with it.

Brainstorming

During the brainstorming process, I struggled to figure out what I should do. Possible ideas revolved around Christmas gifts for others, electronic wooden blocks, and useful items such as a laundry basket for the men’s track locker room. While many of these ideas were great, they were relatively complex or large and would have required a significant amount of time to complete. At my mom’s suggestion, I Googled some beginner woodworking projects and came across a guide for a mountain-shaped coat/key hanger. I chose wood for its ease of cutting and its texture (also because the project specified using wood). I specifically chose plywood because of its texture and because it was one of the thicker pieces available in the BTU. Plywood is made of thin layers of wood veneer glued together; softwood plywood is commonly made from spruce, pine, and fir, a species group known as SPF. It is very affordable and is used for many applications such as walls, roofing, and sheds.

The wooden mountain coat hanger by Ana White is a simple piece made by cutting wood into the shape of mountains, attaching dowels, and painting the wood to resemble the beautiful mountains of Colorado.

Starting the climb

I first obtained a piece of plywood and used the table saw to cut it to the dimensions specified in the guide. I disregarded the thickness of the wood and only made sure that the piece I used seemed thick enough to hold three coats. Next, I drew cut lines using a ruler and a construction square so I could cut the mountain shapes. I struggled to make the measurements for the lines exactly as specified in the instructions due to a lack of angle measurements for the triangle-shaped mountains. Nevertheless, I ensured that the heights of the mountains relative to one another matched the instructions. I used the band saw to cut along the lines, then used the power sander to sand off some of my imperfections.

The guide specified using a 3/8-inch drill bit to create the screw holes for attaching the dowels. I thought this was unusually large for a screw hole, especially because the instructions specified using a miter to drill a hole for the dowels. I could not find any miters in the BTU lab, but I decided that I could probably skip them since the 3/8-inch hole was quite large. I used the drill press to make three 3/8-inch-diameter holes in the piece.

Pitstop at McGuckin’s

I then journeyed to McGuckin’s Hardware to buy some dowels and screws to attach the dowels to the piece. I purchased three brass screws, 1 1/2 inches long and 1/4 inch in diameter, along with a large dowel, an inch in diameter, for me to cut.

Assembling the hanger.

To cut the dowels, I marked them at 2 inches in length with a pencil. I first tried the table saw; unfortunately, it launched my newly cut dowels backwards like Angry Birds multiple times, leaving fragments in the cuts. I decided to recut the dowels on the band saw for a much better and less deadly experience. To attach the dowels, I used a power drill with a bit smaller than the screws’ diameter to drill straight down and guide the screws in. I then used the power drill with a screwdriver bit to screw the dowels and the mountain-shaped base together. It took a couple of tries of removing and reinserting the screws to get them to go straight down, but I eventually succeeded. Finally, I used a very small drill bit to make two more holes in the piece for the nails that would attach the finished piece to a wall.

Finishing up

My final step was to finish and paint the wood. I used a brown finish from the BTU lab to coat the piece. While I could have applied a second coat to get the darker tone advertised on the bucket of finish, I thought the light brown looked good. I then used white paint to paint the snow on top of the mountains.

Ta-da