
· One min read
Aaron Zheng

Table of Contents:

  • Overview
  • Example Pictures of the Robot
  • My contributions

Overview

Anodroid is a 12-DOF humanoid robot standing 54 cm (about 21 in) tall. It is a fully integrated machine that can move around on both flat and tilted surfaces.

Example Pictures of the Robot

[Image: the Anodroid robot]

My Contributions

I independently designed and built Anodroid as a personal project. During its development, I drew much inspiration from the Poppy project.

· 4 min read
Aaron Zheng

UGIS 192D, Fall 2022


Table of Contents:

  • Overview
  • Example Photos
  • My contributions
    • Development of New VSCode Extension
    • Identifying bugs and issues
  • What I learnt

Overview

JIPCAD (Joint Interactive Procedural Computer-Aided Design) is a new computer-aided design tool that supports both procedural and interactive modeling. Procedural modeling lets users sketch 3D models in NOME, JIPCAD's proprietary programming language. Alternatively, users can model 3D objects interactively through the graphical interface, for example by adding new faces and polylines.

JIPCAD grew out of the JIPCAD project, initiated and developed by Professor Carlo Sequin and his team of researchers at UC Berkeley. It is built specifically for modeling 2-manifold free-form surfaces of high complexity and inherent regularity, such as the Möbius strip or sculptures by artists like Eva Hild and Charles O. Perry.

Example Photos of Modeling Using JIPCAD

[Images: Cable Knot Torus, Spinal Knot]

My Contributions

Development of a New VSCode Extension for NOME

During my time as a researcher on the JIPCAD project, I developed the first version (0.0.0) of a VSCode extension for NOME, JIPCAD's proprietary language. The features I managed to include were the following:

  • Autocompletion of commands

I implemented autocompletion for code blocks: once an opening code block is typed into the editor, the corresponding closing block appears automatically.
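For illustration, this behaviour can be achieved with a snippet contribution; a minimal sketch, assuming a mesh ... endmesh block form (the keywords here are placeholders for NOME's actual block syntax):

```json
{
  "Mesh block": {
    "prefix": "mesh",
    "body": ["mesh ${1:name}", "\t$0", "endmesh"],
    "description": "Expand to a mesh ... endmesh block"
  }
}
```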

  • Syntax colouring

I also implemented syntax colouring, specifically the colouring of variables, commands, comments, and parameters. This makes the extension more user-friendly, as developers using NOME can see at a glance what each part of their code represents.
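Under the hood, VSCode syntax colouring is typically driven by a TextMate grammar. A minimal sketch of what the NOME grammar could look like (the token patterns here are illustrative, not the extension's actual rules):

```json
{
  "scopeName": "source.nome",
  "patterns": [
    { "name": "comment.block.nome", "begin": "\\(\\*", "end": "\\*\\)" },
    { "name": "keyword.other.command.nome", "match": "\\b(point|polyline|face|mesh|instance)\\b" },
    { "name": "variable.parameter.nome", "match": "\\{[^}]*\\}" }
  ]
}
```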

  • Commenting

The NOME JIPCAD extension also supports toggling block comments. In addition, commenting with the opening/closing delimiter pair "(*" and "*)" is enabled for all files with the .nom or .jipcad suffix.
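In a VSCode extension, these comment delimiters are declared in the extension's language-configuration.json; a minimal sketch:

```json
{
  "comments": {
    "blockComment": ["(*", "*)"]
  },
  "autoClosingPairs": [
    { "open": "(*", "close": "*)" }
  ]
}
```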

  • Running the NOME Executable

The NOME extension also lets developers run the NOME executable without opening a file explorer or navigating directories in a terminal. Instead, the extension contributes a custom VSCode command that locates and launches the NOME executable.
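A minimal sketch of how such a command can be registered (TypeScript; the command ID and setting name below are illustrative, not the extension's actual identifiers):

```typescript
import * as vscode from 'vscode';

export function activate(context: vscode.ExtensionContext) {
  const runNome = vscode.commands.registerCommand('nome.runExecutable', () => {
    // Read the user-configurable executable path (see the setting below).
    const config = vscode.workspace.getConfiguration('nome');
    const exePath = config.get<string>('executablePath', '~/JIPCAD/nome');

    // Launch the NOME executable in an integrated terminal.
    const terminal = vscode.window.createTerminal('NOME');
    terminal.sendText(exePath);
    terminal.show();
  });
  context.subscriptions.push(runNome);
}
```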

  • Customising the Directory of the NOME Executable

Tying back to the previous feature, the NOME VSCode extension allows developers to input a customised path to the NOME executable. The default path is the JIPCAD directory, a subdirectory of the HOME directory.
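Such a setting is contributed through the extension's package.json; a minimal sketch (the nome.executablePath name and default shown are illustrative):

```json
{
  "contributes": {
    "configuration": {
      "title": "NOME",
      "properties": {
        "nome.executablePath": {
          "type": "string",
          "default": "~/JIPCAD/nome",
          "description": "Path to the NOME executable."
        }
      }
    }
  }
}
```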

In the future, I plan to include more advanced features, such as:

  • Semantic highlighting
  • Syntax error reporting

I believe these features will significantly improve the experience of developers working with NOME, making programming in the language much simpler.

Identifying bugs and issues with the JIPCAD software

While sketching some designs, I identified several issues with the JIPCAD software. I discovered the bug described below whilst trying to sketch a robot.

My robot sketch:

[Image: robot sketch in JIPCAD]

JIPCAD is able to merge models together. Merging is essential because it allows a .nom or .jipcad design file to be converted into a format compatible with 3D printing. My design above, however, could not be merged: JIPCAD is unable to merge two sweeps that share the same face (specifically, the robot's shoulder (green) and the robot's two arms (yellow)).

Although I discovered the bug, I have unfortunately not yet been able to fix it.

What I learnt

During the research apprenticeship, I learnt a lot about NOME, the proprietary programming language, and how to use it to sketch points, rotated shapes, and more. I also gained a more in-depth understanding of compilers, parser generators like ANTLR4, regular expressions, VSCode configuration files such as .json files, and how to write a grammar for a new programming language.

· 3 min read
Aaron Zheng

Table of Contents:

  • Overview
  • App Demonstration Video
  • My contributions
    • Learning Page
    • Camera Page
    • Recycler Interface
    • AI model and detection
  • What I learnt

Overview

Phocabulary is an educational app built for students, by students. Using AI models, Phocabulary allows users to see and learn about physical objects on their camera screens with just a click.

Phocabulary targets children across Hong Kong; it not only teaches them vocabulary but also makes them more aware of their surroundings. The app can already detect 90+ objects in the environment. In the future, Phocabulary will add accounts and log-in functionality, allowing users to interact with each other and play fun quizzes together, helping them retain previously learned knowledge and build friendships.

App Demonstration

My Contributions

In the Phocabulary project, I was responsible for developing the application (i.e., turning the idea into reality), including building its various pages, features such as the learning, camera, and recycler interfaces, and, most importantly, the AI models used to recognise objects on the user's camera screen.

Learning Page

Learning with Camera

Camera Learning

New vocabulary page

New Vocab

The learning page consists of a custom AlertDialog interface that pops up when an object in the camera view is clicked. Once the Learn More button is clicked, the user is taken to a new window where they can learn the word in question, see a picture of it, and read a definition. If they click the Got It button, a GIF of smiling and clapping appears.
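A minimal sketch of this flow (Kotlin; Phocabulary's actual class and extra names may differ, and LearnWordActivity here is a hypothetical stand-in):

```kotlin
import android.app.Activity
import android.app.AlertDialog
import android.content.Context
import android.content.Intent

// Hypothetical stand-in for the app's real learning screen.
class LearnWordActivity : Activity()

fun showLearnDialog(context: Context, detectedLabel: String) {
    AlertDialog.Builder(context)
        .setTitle(detectedLabel)
        .setPositiveButton("Learn More") { _, _ ->
            // Open the learning screen for the tapped object's word.
            val intent = Intent(context, LearnWordActivity::class.java)
            intent.putExtra("word", detectedLabel)
            context.startActivity(intent)
        }
        .show()
}
```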

Camera Page

[Image: camera page]

This was made with the built-in Android camera interface.

Recycler Interface

[Image: recycler interface]

The recycler interface was built with a dynamically controlled RecyclerView. Its items are created one at a time from a pre-built datasheet containing all the images and definitions.
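A minimal sketch of such an adapter (Kotlin; the VocabEntry type and resource IDs are illustrative, not the app's actual names):

```kotlin
import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.RecyclerView

// Illustrative row type for the vocabulary datasheet.
data class VocabEntry(val word: String, val definition: String, val imageRes: Int)

class VocabAdapter(private val entries: List<VocabEntry>) :
    RecyclerView.Adapter<VocabAdapter.VocabHolder>() {

    class VocabHolder(view: View) : RecyclerView.ViewHolder(view) {
        val word: TextView = view.findViewById(R.id.word)
        val definition: TextView = view.findViewById(R.id.definition)
        val image: ImageView = view.findViewById(R.id.image)
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): VocabHolder {
        // Inflate one row at a time as the list scrolls.
        val view = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_vocab, parent, false)
        return VocabHolder(view)
    }

    override fun onBindViewHolder(holder: VocabHolder, position: Int) {
        val entry = entries[position]
        holder.word.text = entry.word
        holder.definition.text = entry.definition
        holder.image.setImageResource(entry.imageRes)
    }

    override fun getItemCount() = entries.size
}
```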

Quiz Interface

Correct Answer:

[Image: correct answer]

Wrong Answer:

[Image: wrong answer]

After Clicking the Show Answer button:

[Image: quiz interface after showing the answer]

The quiz interface was built with a custom view comprising an image on the left and three buttons on the right, representing the multiple-choice answers. Children click a button to answer the question, and the button changes colour to show whether the answer was correct.
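A minimal sketch of that behaviour (Kotlin; the green/red colour choices are assumptions based on the screenshots above):

```kotlin
import android.graphics.Color
import android.widget.Button

fun bindAnswerButton(button: Button, choice: String, correctAnswer: String) {
    button.setOnClickListener {
        // Green for a correct choice, red for a wrong one.
        val colour = if (choice == correctAnswer) Color.GREEN else Color.RED
        button.setBackgroundColor(colour)
    }
}
```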

AI Models and Detection

[Image: AI models]

The AI models used were SSD-MobileNet version 2, an open-source object detection model, as well as YOLOv5 (You Only Look Once, version 5).
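For illustration, a sketch of how an SSD-MobileNet v2 TFLite model can be run on Android with the TensorFlow Lite Task Library (the model file name and thresholds are illustrative; the app's actual inference pipeline may differ):

```kotlin
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.task.vision.detector.ObjectDetector

fun detectObjects(context: Context, frame: Bitmap): List<Detection> {
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setMaxResults(10)        // keep at most 10 boxes per frame
        .setScoreThreshold(0.5f)  // drop low-confidence detections
        .build()
    val detector = ObjectDetector.createFromFileAndOptions(
        context, "ssd_mobilenet_v2.tflite", options)
    // Returns labelled bounding boxes with confidence scores.
    return detector.detect(TensorImage.fromBitmap(frame))
}
```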

What I Learnt

I learnt more about how object detection models work, and how to train my own object detection models using transfer learning and OpenCV. I also gained a much deeper understanding of Android Studio and the Android application development API, including how to incorporate features such as the camera, how to effectively overlay views (such as rectangular boxes highlighting detections), and how to configure and customise Android views.

· One min read
Aaron Zheng

Table of Contents:

  • Overview
  • App Demonstration Video
  • What I learnt

Overview

Aquatech is a simple environmental protection app which aims to help in the effort of protecting and maintaining Hong Kong's beaches. It was made as part of a hackathon project by a team of three, myself included.

Aquatech serves as both a water-quality information service and a water-quality monitoring service. The app contains initial water-quality data for three Hong Kong beaches, namely Repulse Bay, Ma On Shan, and Mui Wo, which our team measured using sensors that capture conductivity, pH, and temperature data. We also enable users to share the data they collect with others through the app.

Our app is available for download here: https://play.google.com/store/apps/details?id=comp.envirobros.administrator.aquatech

App Demonstration

What I Learnt

I learnt more about developing Android applications. I also learnt about databases and how to enable different users to share data with each other. Moreover, I learnt more about UI and UX design and how to make good user interfaces.

· 2 min read
Aaron Zheng

Table of Contents:

  • Overview
  • App Demonstration Video
  • What I did

Overview

Meet Zenbo, a new robot friend for seniors, kids, and anybody else who wants to invite a real-world version of BB-8 into their own home. Zenbo and Zenbo Junior are humanoid robots (with wheels attached) made by ASUS.

[Image: Zenbo]

It has the ability to talk, move around, and act as a robot friend to kids and seniors.

Zensafety is an application made for Zenbo that helps users secure their most prized possessions. Users can select objects to track from a list of 90, and Zensafety keeps track of the security status of each of those objects using its object detection technology.

By opening Zenbo's camera feature, users can see what Zenbo's camera sees, along with customised coloured rectangles labelling the position and confidence level (how sure Zenbo is of the detection) of each detected object in the frame, updated every 200-300 ms.

Each selected object is given a security rating from 1 to 10, based on how many instances of the object are detected and the confidence level of each detection. If any object is found not to be secured (no detection, or less than 60% confidence), the user is notified. Additionally, the status of each tracked object is written into locally saved text documents that the user can access through Zensafety.
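A minimal sketch of the secured check and status log (Kotlin; the TrackedDetection type and file handling are illustrative, while the 60% threshold comes from the description above):

```kotlin
import java.io.File

// Illustrative detection record: object label plus detector confidence.
data class TrackedDetection(val label: String, val confidence: Float)

fun logSecurityStatus(
    trackedObjects: List<String>,
    detections: List<TrackedDetection>,
    statusFile: File,
    notify: (String) -> Unit
) {
    for (obj in trackedObjects) {
        val best = detections.filter { it.label == obj }.maxByOrNull { it.confidence }
        // Not secured: no detection at all, or best confidence below 60%.
        val secured = best != null && best.confidence >= 0.60f
        if (!secured) notify("$obj is not secured!")
        statusFile.appendText("$obj: ${if (secured) "secured" else "NOT secured"}\n")
    }
}
```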

Zensafety utilises many of Zenbo's built-in features, such as voice control, allowing Zenbo to communicate directly with users through system-initiative dialogue and build a good interaction experience. Additionally, Zensafety demonstrates multimodality and displays emotions, such as smiling when first welcoming users, or love and shyness when tapped on the head, which helps it act as a companion to users.

Zensafety Demonstration Video

What I did

This project was independently designed and developed by me as part of a yearlong course offered by the HK Academy for Gifted Education. I was guided by Dr. Wendy Hui of Lingnan University throughout the project.