“It’s ALIVE!” - A Look into Machine Learning
Machine Learning is all around us, from the auto-correct at our fingertips to the integral technologies that build our world. Machines are being built smarter and more adaptable as the years go on, and mechanical intelligence has reached an all-time high. But what exactly does this all mean? What are our machines truly learning? Well, to put it simply, they’re learning about us. They are trying to align themselves with a world they know only through our input.
Information can be collected on just about anything, and as seen in fig. a, it all starts as input. The input is fed through a learning model, which produces output. When that output contains an error, the error is fed back into the learning model so the machine can produce the correct output, ultimately learning from the experience. With this basic flow in mind, there are three main forms Machine Learning takes:
Supervised Learning, Unsupervised Learning, and Reinforcement Learning.
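The input → model → output → error loop above can be sketched in just a few lines. This is a minimal illustration, not any particular library’s API: the single-weight model, the data, and the update rule are all assumptions chosen to make the feedback loop visible.

```python
# A minimal sketch of the input -> learning model -> output -> error loop.
# The "model" is one weight trying to learn y = 2x; every wrong output feeds
# its error back into the model, which is the learning described above.

def learning_loop(samples, rate=0.1, rounds=50):
    """Adjust one weight by feeding each output error back into the model."""
    weight = 0.0  # the model starts knowing nothing
    for _ in range(rounds):
        for x, target in samples:
            output = weight * x         # input -> learning model -> output
            error = target - output     # compare output to the correct answer
            weight += rate * error * x  # feed the error back into the model
    return weight

# The machine "learns from the experience": the weight settles near 2.
data = [(1, 2), (2, 4), (3, 6)]
learned = learning_loop(data)
```

Each pass through the loop shrinks the error, which is exactly the point of the flow in fig. a: the error is not a failure, it is the teaching signal.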
For a broad overview of the topics I will cover, this video by Simplilearn does an amazing job of breaking the concepts down into great building blocks.
In our example in fig. b, we have a simple graph of song choices generated from a music streaming service. Red stars indicate a dislike, and green stars a like. It is then up to the machine to guess whether the black stars fall into the like or dislike category. In Supervised and Unsupervised Learning, this is done through data-based learning, while Reinforcement Learning is done through reward-based learning. Data-driven learning can be good for systems of statistics and quantifiable information, but for something more nuanced and subjective, Reinforcement Learning might be a better choice. In this learning type, the machine checks its output against feedback on its previous answers. This means that, although it will need a human touch at first, the machine should eventually become fairly self-reliant, able to adapt and grow on its own.
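The fig. b guess can be sketched with a simple nearest-neighbour rule: a black star gets whichever label its closest known star carries. The coordinates and the one-neighbour rule here are illustrative assumptions, not the actual data behind fig. b.

```python
from math import dist

# Hedged sketch of fig. b: liked (green) and disliked (red) songs as points,
# and a nearest-neighbour guess for a new "black star" song. All coordinates
# are made up for illustration.

likes = [(1.0, 5.0), (2.0, 4.5), (1.5, 4.0)]      # green stars
dislikes = [(5.0, 1.0), (6.0, 2.0), (5.5, 1.5)]   # red stars

def guess(song):
    """Label a new song by whichever known star sits closest to it."""
    labeled = [(p, "like") for p in likes] + [(p, "dislike") for p in dislikes]
    _, label = min(labeled, key=lambda pl: dist(pl[0], song))
    return label

verdict = guess((1.8, 4.2))  # a black star near the green cluster
```

Because the black star at (1.8, 4.2) sits inside the green cluster, the machine guesses “like” — the same visual intuition the graph gives a human reader.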
Here in fig. c we see a simplified overview of the relationships between Machine Learning styles, which makes the subtle differences much clearer. Although Supervised and Unsupervised Learning are both based on collected data, they have a few major differences. Supervised Learning is based on labeled data given to the machine by human input. The machine associates previously defined features with each incoming item, ultimately resulting in a body of learning curated by the human touch. Unsupervised Learning is based on unlabeled data, with no human labeling at all; the learning rests solely on the naturally occurring patterns within the incoming data, which is then interpreted through those clusters of information. Reinforcement Learning combines both human and machine guidance in its learning, creating a learning style unique to each machine. It is built on the preservation of past mistakes and the ability to evolve with accumulated decisions. With this model, the more data that passes through, the higher the accuracy should become.
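The unsupervised branch is the easiest to show in miniature: with no labels at all, the machine can still find the “naturally occurring patterns” in the data. Below is a tiny two-centre clustering pass; the numbers and the two starting centres are illustrative assumptions, not any dataset from the article.

```python
# Unsupervised Learning in miniature: group unlabeled numbers around two
# centres with no human labels involved, only the structure of the data.

def two_means(points, rounds=10):
    """Pull two centres toward the clusters that naturally form in the data."""
    a, b = min(points), max(points)  # crude starting centres
    for _ in range(rounds):
        near_a = [p for p in points if abs(p - a) <= abs(p - b)]
        near_b = [p for p in points if abs(p - a) > abs(p - b)]
        a = sum(near_a) / len(near_a)  # each centre moves to the middle
        b = sum(near_b) / len(near_b)  # of the points closest to it
    return sorted([a, b])

# Two clusters emerge from the raw numbers: one near 1.5, one near 9.5.
clusters = two_means([1, 2, 1.5, 9, 10, 9.5])
```

Nobody told the machine what the groups mean — the clusters themselves are the output, which is exactly the contrast with the labeled, human-curated data of Supervised Learning.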
With all that said, the most important part of Machine Learning is error handling. The only way a machine can properly learn is through the response that follows an initial output error. Whether that response is supplied or created, data-based or error-based, the information collected is truly amazing. Machine Learning has the potential to completely restructure the world as we know it, streamlining our lives for a more efficient experience.
Now that we have the broad strokes of Machine Learning, let’s tie it to some examples. In fig. b we talked about learning models in terms of likes and dislikes. The same idea can be applied to more serious issues, like enforcing a company’s Terms of Service. Suppose a user uploads content that in some way violates the User Agreement. Machine Learning could be implemented to filter the content, trigger a violation, and then remove the content in question. In the case of a mistake, a human reviewer would step in and the machine would be notified. This process provides a tremendous amount of insight for the given learning model and ultimately leads to better results. Long story short, humans and machines both learn the most from the ways they handle their mistakes.
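That flag-then-review flow can be sketched as a filter whose knowledge is corrected by human reviewers. A real moderation system would be far more sophisticated; the word-list approach, class names, and example terms below are all assumptions made purely for illustration.

```python
# Hedged sketch of the Terms-of-Service flow: flag uploads against a learned
# list of terms, and let human review feed corrections back to the machine.

class ContentFilter:
    def __init__(self, banned):
        self.banned = set(banned)

    def violates(self, text):
        """Trigger a violation if any learned term appears in the upload."""
        return any(word in self.banned for word in text.lower().split())

    def human_review(self, word, is_violation):
        """A human reviewer corrects a mistake, and the machine is notified."""
        if is_violation:
            self.banned.add(word)
        else:
            self.banned.discard(word)

f = ContentFilter(["spam", "scam"])
first = f.violates("totally legit offer")    # the machine misses this one
f.human_review("offer", is_violation=True)   # a human flags the miss
second = f.violates("totally legit offer")   # the machine has now learned
```

The machine misses the violation, a human catches it, and the next identical upload is caught automatically — the mistake itself becomes the training data.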
A simple real-world example is YouTube’s implementation of Reinforcement Learning on its platform. Initially, creators and viewers alike found themselves overwhelmed by the decisions the system was making. Over time there has been a noticeable improvement in accuracy, even on incredibly nuanced issues. Of course nothing is perfect, but this is a wonderful example of how, over time, things really can evolve. In fig. d, YouTube CEO Susan Wojcicki addresses several topics, including this concept and the future of the technology behind the platform.
While some companies have faced only controversy, nothing is worse than Machine Learning putting human life at risk. One place this is immediately visible is the total automation of self-driving cars, discussed in the Forbes article below.
What Happens When Self-Driving Cars Kill People?
In recent years autonomous vehicles have moved from a fanciful science fiction topic to actual reality, with real cars…
As seen in the fig. e graphic provided by Cognilytica, self-driving cars are rated on a 0 through 5 scale, with level 0 covering almost every car on the road. A higher-end car with one feature, such as self-parking, would be a level 1 or 2 depending on the specifications. Tesla’s AutoPilot feature is rated at level 3 and offers total driving with a human “at the ready!” The car discussed in the Forbes article sits around level 4 or 5, where currently very few cars rank. Even in the case of the Uber self-driving car test run covered there, a driver was at the ready. With this technology already in use, everything seemed in their favor, but tragically, in March 2018, the vehicle struck and killed a pedestrian. Although this is the absolute worst-case scenario, it leaves us asking: how could this have been avoided? Were there any missed edge cases in the software?
Preventing that worst-case scenario needs to be at the forefront of every programmer’s mind. The lecture above discusses this, along with the importance of edge learning. This concept allows the machine to make even more important decisions with greater accuracy and precision, hopefully enabling a faster-evolving world of Machine Learning. In simpler terms, as illustrated in fig. g: although the general knowledge for the entire platform lives in a more removed cloud, the information for individual experiences needs to remain locally within the machine. If the machine doesn’t have close enough access to those past reactions, the efficacy of the previous learning is compromised. With the rise of Machine Learning in various fields, it is imperative to set these precedents in how the technology is structured before it becomes even more integrated into our society.
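The cloud/local split in fig. g can be sketched as a device that prefers its own remembered reactions and only falls back on shared cloud knowledge. Every name and situation below is an illustrative assumption, not an actual edge-learning framework or vehicle API.

```python
# Rough sketch of the fig. g split: general knowledge lives in a shared
# "cloud" model, while each machine keeps its own recent reactions locally,
# so past experience stays close at hand when a decision is needed.

class EdgeDevice:
    CLOUD_DEFAULTS = {"pedestrian": "brake", "green_light": "go"}  # shared knowledge

    def __init__(self):
        self.local_memory = {}  # individual experience stays on the machine

    def decide(self, situation):
        """Prefer locally remembered reactions; fall back to cloud knowledge."""
        if situation in self.local_memory:
            return self.local_memory[situation]
        return self.CLOUD_DEFAULTS.get(situation, "slow_down")

    def remember(self, situation, reaction):
        """Keep a past reaction locally so later decisions can reuse it fast."""
        self.local_memory[situation] = reaction

car = EdgeDevice()
default = car.decide("ice_patch")          # unknown: cautious cloud fallback
car.remember("ice_patch", "steer_gently")  # learned locally, no round trip
learned = car.decide("ice_patch")          # now answered from local memory
```

The point of keeping `local_memory` on the machine is exactly the one made above: if past reactions lived only in the cloud, a dropped connection or slow round trip would compromise the learning precisely when it is needed most.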
On a much more positive note, Machine Learning has revolutionized healthcare in a groundbreaking way. The capability to take in completely unstructured data and synthesize it into organized packages is revolutionary. Several of these amazing applications are discussed in the article below by IBM.
Artificial Intelligence in Medicine | Machine Learning
Artificial Intelligence (AI) can identify relationships in raw data, used to support diagnosing, treating, & predicting…
“AI is a tool. The choice about how it gets deployed is ours.” -Oren Etzioni
Although Artificial Intelligence has origins in medicine as far back as 1972, it wasn’t truly utilized as an option in healthcare until the mid-2000s. Here in fig. g, we learn about IBM Watson, an incredible piece of technology being utilized in Oncology. In such a difficult branch of medicine, having Artificial Intelligence to assist in the care of patients could really be the difference between life and death. Watson is implemented in a variety of ways, from genetic factors to medical contraindications; the technology is able to interpret a myriad of completely unorganized data and assist in patient care. With technological advancements in medicine happening rapidly, it is vital to keep healthcare professionals equipped to deal with all of the new information arising. By nature, the human mind has finite limits on what it can learn day to day, and even less under the sleeplessness a medical professional can experience. To mitigate any chance of preventable human error, it would be naive not to utilize this incredible technology. Although Machine Learning and Artificial Intelligence can have a learning curve, the sooner a learning model is implemented, the better the technology can become for our future.
AI In 2019 According To Recent Surveys And Analysts' Predictions
Artificial Intelligence (AI) is the talk of the world and it features prominently in predictions for 2019 (see here and…
In the above article’s 2019 statistics on the use of Artificial Intelligence, we can see that although machines aren’t being used to their full capacity, the technology is certainly on the rise in a number of ways. In quite literally every corner of the world, Machine Learning could be implemented in the next couple of decades. All things considered, it will be fascinating to see where these technological advancements can lead us. With the chance for machines to properly learn from all of the input we have to offer, we are truly in an age where humans are growing right alongside machines.
— — — Written by Kathleen McKiernan for Holberton New Haven — — —