I recently came across a very cool scripting language: RiveScript, which can be used in chatbots and other conversational entities. It's a plain-text scripting language that keeps simple replies simple; for example:
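A minimal snippet for this, following the RiveScript documentation's trigger (`+`) and reply (`-`) syntax, looks like:

```
! version = 2.0

+ hello bot
- Hello, human!
```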
This adds a reply so that when a human says "Hello bot", the bot responds with "Hello, human!"
With more advanced RiveScript code, we can learn and repeat user variables and use more complicated trigger-matching patterns:
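Here is a sketch based on the examples in the RiveScript documentation, using a wildcard (`*`) trigger and the `<set>`/`<get>` tags to remember a user variable:

```
+ my name is *
- <set name=<formal>>Nice to meet you, <get name>!

+ what is my name
- Your name is <get name>, silly!
```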
This looks like AIML (Artificial Intelligence Markup Language), doesn’t it?
I incorporated the ml5.js library for the machine learning portion of the code. ml5.js brings machine learning to the web, right in your browser. Through some clever and exciting advancements, the folks building TensorFlow.js figured out that it is possible to use the web browser's built-in graphics processing unit (GPU) to do calculations that would otherwise run very slowly on the central processing unit (CPU).
PoseNet is a machine learning model that allows for real-time human pose estimation.
PoseNet can be used to estimate either a single pose or multiple poses, meaning there is a version of the algorithm that can detect only one person in an image or video, and another version that can detect multiple persons.
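As a sketch of what this looks like with ml5.js in the browser (here `video` is assumed to be an HTML `<video>` element; the option and event names follow the ml5.js PoseNet documentation):

```javascript
// Load PoseNet on a <video> element; detectionType can be
// 'single' or 'multiple' depending on which version you want.
const poseNet = ml5.poseNet(video, { detectionType: 'single' }, () => {
  console.log('PoseNet model loaded');
});

// Each detected pose comes back as a list of keypoints
// (nose, eyes, wrists, ...) with positions and confidence scores.
poseNet.on('pose', (results) => {
  if (results.length > 0) {
    const nose = results[0].pose.keypoints[0];
    console.log(nose.part, nose.position.x, nose.position.y);
  }
});
```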
Unless you’ve been living on another planet, you’ve probably heard about Amazon’s Alexa. It is a pretty cool cloud-based voice service.
What can you do with Alexa?
I’ve been looking into the technology behind Alexa. At a high level, it’s simple yet elegant.
I believe Alexa uses SSML (Speech Synthesis Markup Language) when converting text to speech (TTS), because she sounds very conversational, rather than like a robot reading the text word by word.
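For instance, SSML lets a skill add pauses and emphasis instead of reading flat text; the tags below are from the subset Alexa documents as supported (the sentence itself is made up):

```xml
<speak>
    Hello, human!
    <break time="500ms"/>
    I am <emphasis level="moderate">really</emphasis> glad you asked.
</speak>
```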
Here are a few more technical diagrams of how it works at a high level:
I thought that developing an Alexa Skill was straightforward and user-friendly, especially if you have used any ML/AI tools. Navigation and setup look similar to other such platforms.
I created a few new test skills by using existing templates and added new custom intents. It was fun, and I can see that with a little bit of creativity, some great skills can be added to this fun smart tool!
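Under the hood, a custom intent boils down to an entry in the skill's JSON interaction model; a trimmed, hypothetical example (the invocation name and sample utterances are made up) looks roughly like:

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "space facts",
      "intents": [
        {
          "name": "GetNewFactIntent",
          "slots": [],
          "samples": [
            "tell me a space fact",
            "give me a fact"
          ]
        }
      ]
    }
  }
}
```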
Please watch my demo down below:
The first skill is a game template where I added new custom intents.
The second skill calls a fun external API that returns the number of astronauts currently in space and their names. 🙂 http://api.open-notify.org/astros.json
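The JSON that endpoint returns is simple to work with; here is a small Python sketch that counts the astronauts in a response (the payload below is a made-up sample in the API's documented shape, so the real numbers and names will differ):

```python
import json

def count_astronauts(payload: str):
    """Parse an astros.json-style response and return (count, names)."""
    data = json.loads(payload)
    names = [person["name"] for person in data["people"]]
    return data["number"], names

# A made-up sample in the same shape as http://api.open-notify.org/astros.json
sample = json.dumps({
    "message": "success",
    "number": 2,
    "people": [
        {"name": "Jane Doe", "craft": "ISS"},
        {"name": "John Roe", "craft": "ISS"},
    ],
})

count, names = count_astronauts(sample)
print(f"{count} astronauts in space: {', '.join(names)}")
```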
Another interesting video is “Lessons Learned Growing Alexa,” in which the Amazon team discusses a few fun capabilities/skills.
Bonus point: if you’d like to do a deep dive and create Alexa Skills with a serverless backend, this YouTube video should help, and here are some more technical diagrams from the presentation:
YOLOv3 is an algorithm that uses deep convolutional neural networks to perform object detection.
In this quick project, I used code that converts the official pre-trained YOLOv3 weights into TensorFlow models.
Once converted, I ran the Python detection code, which uses the TensorFlow models to detect and identify objects. I think it did pretty well.
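The detection step itself boils down to the network proposing many candidate boxes and then keeping the confident, non-overlapping ones. As a toy illustration (not the repo's actual code), here is the intersection-over-union measure that the non-max suppression step relies on, with made-up box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes don't intersect)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two made-up candidate boxes for the same object
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))    # partial overlap
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # disjoint boxes -> 0.0
```

When two boxes overlap more than a chosen threshold, the lower-confidence one is discarded; that is why the video shows one clean box per object instead of dozens of near-duplicates.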
As you can see in the short video below, it detected my refrigerator, books on the shelves, a bottle, myself as a person, my cell phone, my cup, a teddy bear, and potted plants. 🙂
Very exciting, and these are some of the technologies that are and will be used in self-driving cars.
Please also check out my Face Detection project, here: