by Devin Giannoni
Techtonic Software Developer

September 25, 2019 – A crew of Techtonic developers hit the road for Denver to attend an Artificial Intelligence/Machine Learning presentation by Global Machine Learning Specialist Kris Skrinak. Designed to showcase the capabilities and best practices of AI/ML, the presentation demonstrated several of Amazon’s AI services and included a hands-on lab with Amazon’s SageMaker service.

Artificial Intelligence, Machine Learning, and Deep Learning are some of the most frequently thrown-around buzzwords in today’s tech world. It’s even been estimated that forty percent of ‘AI startups’ in Europe don’t actually use AI. However, that shouldn’t discourage anyone from jumping into the deep end of the AI pool: it’s estimated there will be 58 million jobs requiring these skills by 2022. As you will see from the examples below, AI is here, and it’s here to stay.

One of the most interesting services demoed was AWS’ Rekognition. In Amazon’s own words, Rekognition is a “Deep learning-based visual analysis service” that can “Search, verify, and organize millions of images and videos.” Rekognition is broken down into different services that can perform the following tasks: object and scene detection, image moderation, facial analysis, celebrity recognition, face comparison and reading text from an image.


AWS’ object and scene detection tool properly analyzing an image and identifying the objects in it.  


AWS’ image to text tool can read a license plate with extreme accuracy.


AWS’ face comparison tool can verify identities to a remarkably accurate degree.
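All of these services are invoked through the AWS SDK and return JSON. As a rough sketch of working with the results, here is a function that filters a response shaped like the documented `DetectLabels` output (the sample data below is made up; a real call would use `boto3.client('rekognition').detect_labels(...)` with valid AWS credentials):

```python
# A sample response in the shape documented for Rekognition's DetectLabels API.
# (A real call: boto3.client("rekognition").detect_labels(Image={...}).)
sample_response = {
    "Labels": [
        {"Name": "Car", "Confidence": 98.1},
        {"Name": "Road", "Confidence": 96.7},
        {"Name": "Person", "Confidence": 88.2},
    ]
}

def confident_labels(response, min_confidence=90.0):
    """Keep only labels at or above the given confidence threshold."""
    return [label["Name"]
            for label in response["Labels"]
            if label["Confidence"] >= min_confidence]

print(confident_labels(sample_response))  # → ['Car', 'Road']
```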

One of Techtonic’s software engineers, Joshua “Moose” Archer, was on site, and he agreed to an interview after the presentation to discuss his thoughts on the future of AI/ML.

DG: What do you think the future of AI will be?

M: The number of problems that we can solve using AI will only be limited by our imagination and hardware. I think we’re closer to general intelligence than we realize. With the advent of quantum computing around the corner, it’s bound to happen in the next 2-4 decades.

DG: What kind of problems do you think AI can solve?

M: Problems of optimization in fields that are limited to human labor. I foresee a massive influx of AI being used to produce crops and other commercial goods. 

DG: What’s the difference between Artificial Intelligence (AI), Machine Learning (ML) and Deep Learning (DL)?

M: Artificial Intelligence (AI) is just the concept of simulating human intelligence with machines. Basically, an AI is given data and an expected result. It then attempts to achieve that result, recording its own progress and modifying its internal constants/variables so its output resembles the expected result as closely as possible.

Machine learning (ML) is one of the concept’s earliest implementations. Think of IBM’s Deep Blue, the chess machine that famously defeated world champion Garry Kasparov: imagine it was given a set of data to “train” with and the goal of “get a checkmate,” with no specific rules on how to play the game. Then several variations of “itself” are made to play against other players. The variation that gets the highest score (i.e., the one closest to a checkmate) is used to seed the next generation of AI players, after passing through an optimization function that modifies its prediction method. This goes on until the AI is able to beat its opponent every time.
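The generate-score-select loop Moose describes can be sketched as a toy genetic search, where the “players” are just numbers and the “score” measures closeness to a hidden target. Everything here is illustrative (Deep Blue itself relied heavily on brute-force search, not this kind of training loop):

```python
import random

random.seed(42)  # make the run reproducible

TARGET = 0.73  # the "checkmate": the value the population should converge to

def score(player):
    """Higher is better: negative distance to the target."""
    return -abs(player - TARGET)

def evolve(generations=200, population_size=20, mutation=0.1):
    # Start with a random population of candidate "players".
    population = [random.uniform(0, 1) for _ in range(population_size)]
    for _ in range(generations):
        best = max(population, key=score)               # select the top variant
        population = [best + random.gauss(0, mutation)  # mutate it into a new generation
                      for _ in range(population_size)]
        mutation *= 0.98                                # shrink mutations over time
    return max(population, key=score)

result = evolve()  # ends up very close to TARGET
```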

Deep Learning (DL) is a newer implementation of AI that uses multilayer perceptrons and, more recently, convolutional neural networks, a class of deep neural networks. Let’s start a little simpler and consider the following: the code representation of a neuron/brain cell is called a perceptron.


At its most basic level there are some inputs (X1, X2, … Xn) which represent data. Each input is multiplied by a weight, the weighted inputs are summed, and an activation function turns that sum into an output. When several of these neurons sit between the inputs and the final output, they form what’s called a “hidden layer,” which maps the input to a probability. Now, a step up from that is a multilayered perceptron:
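That description fits in a few lines of code. A minimal sketch of a single perceptron (the AND-gate weights and bias below are illustrative hand-picked values, not learned ones):

```python
def perceptron(inputs, weights, bias):
    """A single perceptron: weighted sum of inputs, then a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# A perceptron computing logical AND: it fires only when both inputs are 1,
# because 1*1.0 + 1*1.0 - 1.5 = 0.5 > 0, but any other input sums below zero.
def and_gate(x1, x2):
    return perceptron([x1, x2], weights=[1.0, 1.0], bias=-1.5)

print(and_gate(1, 1), and_gate(1, 0))  # → 1 0
```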


The difference here is that every node in one layer connects to every node in the next, so the number of connections grows multiplicatively with the number of inputs and layer sizes. The problem with this approach is that it can lead to overfitting the data set, which means the network only got good at predicting that particular set of data instead of “learning” how to predict in general. So, a convolutional neural network is “regularized” to have more optimized weights for the hidden layers. There’s much more to this, but it’s beyond the scope of this article. If you’re interested in knowing how this works in greater detail, I would recommend checking out The Coding Train’s Neural Networks and Machine Learning playlists.
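The jump from one perceptron to a multilayered network is mostly bookkeeping: each layer repeats the same weighted-sum-plus-activation step on the previous layer’s outputs. A minimal sketch in plain Python (the XOR weights are hand-picked for illustration, not learned):

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: every input feeds every neuron."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Hand-picked (not trained) weights giving an XOR-style network: the two hidden
# neurons act roughly as OR and NAND gates, and the output neuron ANDs them.
HIDDEN_W, HIDDEN_B = [[20, 20], [-20, -20]], [-10, 30]
OUT_W, OUT_B = [[20, 20]], [-30]

def xor_net(x1, x2):
    hidden = layer([x1, x2], HIDDEN_W, HIDDEN_B)  # input layer -> hidden layer
    return layer(hidden, OUT_W, OUT_B)[0]         # hidden layer -> output

print(round(xor_net(1, 0)), round(xor_net(1, 1)))  # → 1 0
```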

DG: Isn’t AI just science fiction? 

M: I would say it depends on how you think about it. If you think of AI as being like Skynet in Terminator, then no. The way AI works right now, it’s just lines of code. It doesn’t have any drives or goals other than what’s specified in the program. It’s more likely that an AI would accidentally harm a human than harm one with intent. The things AI is capable of doing right now will probably be considered trivial in 10 years, and if you look into how they work, it’s not magic, just good code.

Additionally, Kris shared an interesting statistic: 30% of Amazon’s revenue and a whopping 70% of Netflix’s revenue is attributed to machine learning suggestions. Its real-world use is more about improving businesses than indulging in science fiction.

DG: Is AI dangerous?

M: AI isn’t Skynet, and it likely won’t ever be. That’s not to say that AI couldn’t be implemented in a dangerous way; it practically goes without saying that almost any technology in the wrong hands could be used unsafely. But would AI ever suddenly decide that humans are a threat and create scary skeletal robots to wipe us out? Of course not. That’s our human imagination projecting onto the future; if (and it’s a strong if) some super-advanced AI decided it wanted to eradicate us, it would do it in a way we probably couldn’t conceive of. All in all, AI is only as dangerous as the humans who code it.

DG: Will AI replacing humans be a problem?

M: I don’t think AI will literally replace humans, for the most part. Replace them in terms of jobs or services? Sure. But every time we go through a technological paradigm shift, some classes or types of jobs disappear and new ones are created. I’m not saying it won’t be unpleasant for some, depending on your field of work. But consider if AI made manual labor so efficient that human labor was no longer needed: you’re still going to need people who maintain the machines, do a human-level quality control check, write the code, etc. Plus, some things just may not be solvable with AI. And if we were able to produce food without labor costs, it would cut costs and perhaps make food cheaper.

DG: Could you recommend some further reading on the subject?

M: Sure, here’s a list of some interesting articles, as well as some free courses and video lectures.

DG: Thanks for taking the time to answer my questions!

M: You’re welcome!

As you’ve seen, AI is already out in the wild solving real problems. Tasks only conceived of 10 years ago, like the Rekognition service mentioned above, are now considered mundane off-the-shelf products. The discussion is no longer “is AI ready” or “will AI be a problem,” but instead “how can we solve problems better than before with these new tools.” If a business wanted, for example, profanity filters for both text and images, it wouldn’t be that difficult to combine these AI services with the software we already have.
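That combination could be a thin layer over existing pieces. A hedged sketch, assuming a simple word list for text and a Rekognition-style moderation response for images (both are illustrative; a real image check would call Rekognition’s `detect_moderation_labels`):

```python
BLOCKED_WORDS = {"badword", "slur"}  # illustrative word list, not a real one

def text_is_clean(text):
    """Reject text containing any blocked word (ignoring case and punctuation)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(BLOCKED_WORDS)

def image_is_clean(moderation_response, threshold=60.0):
    """Interpret a Rekognition-style DetectModerationLabels response:
    clean if no moderation label meets the confidence threshold."""
    return not any(label["Confidence"] >= threshold
                   for label in moderation_response["ModerationLabels"])

print(text_is_clean("hello there"))  # → True
```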
