From the team that presented AWS re:Invent 2018 session WPS304 “Rapid Prototyping for Today’s Mission”.
Problem
It’s imperative for warfighters to have actionable, up-to-date knowledge about their adversaries. This requires the ability to quickly process and analyze new data when and where it is discovered. The current procedure for gathering, processing, and analyzing data can take weeks or months due to logistics, transport, and data handling complications.
Using the AWS Snowball Edge, this entire process can be automated, delivering insights from data collected in the field within minutes or hours.
AWS Snowball Edge
The Snowball Edge is a portable compute device capable of ingesting and processing up to 100 TB of data at the edge. With AWS capabilities pre-installed, users can develop automated data processing pipelines in the AWS cloud, then deploy those pipelines to run locally on the Snowball Edge. This lets us generate immediate insights locally, without moving data off the device.
Machine Learning Use Cases
The unique capabilities of the Snowball Edge enable us to deploy state-of-the-art models into the field so that warfighters can more quickly get actionable intelligence. We followed a rapid prototyping process to develop both an image classification model and a text classification model. In both cases, we used transfer learning, which allows us to train models with far fewer training examples. Transfer learning creates a new machine learning model by adapting a base model that was originally trained for a different task. A base model trained on large amounts of data (e.g., millions of images or documents) can be fine-tuned with a much smaller dataset (on the order of hundreds of examples) to produce the desired behavior. The diagram below presents a simplified view of how transfer learning can be employed.
Image Classification
Since 2012, transfer learning has been an integral part of state-of-the-art image classification models. In the case of transfer learning for images, a base model is trained on a large dataset with many classes, such as ImageNet. At a high level, the early layers of the model can be thought of as learning to identify building blocks, such as basic shapes and colors. In the later layers, the model learns how to combine these building blocks to distinguish between the different classes.
When a model needs to be trained for a related task, such as classifying images not in the original training set, the base model is fine-tuned to recognize specific objects constructed from the basic building blocks. The base model provides a starting point, having learned relevant information that allows for a new model to be constructed with many fewer training examples in much less time.
For the Snowball Edge, we used transfer learning to build models that identify images containing banners, flags, and emblems of interest. Using a dataset of just 100 positive and 100 negative examples, we quickly created an image classification model that accurately identified images of interest.
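As a concrete illustration of this fine-tuning step, the sketch below freezes an ImageNet-pretrained backbone and retrains only its final layer as a two-class classifier. The framework (PyTorch/torchvision), the ResNet-18 architecture, and the data layout are assumptions for illustration, not necessarily the exact setup used for the prototype.

```python
# Minimal transfer-learning sketch using PyTorch/torchvision.
# The ResNet-18 backbone, image size, and directory layout are
# illustrative assumptions, not the exact prototype configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for the pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/positive, data/train/negative.
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

# Start from an ImageNet-pretrained base model and freeze its layers...
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False

# ...then replace the final layer with a new two-class head.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# A few epochs are typically enough with only ~200 examples.
model.train()
for epoch in range(5):
    for images, labels in train_dl:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```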
Text Classification
While transfer learning has been widely applied to image tasks, it had not been widely explored for text classification until 2018. In May of 2018, Jeremy Howard and Sebastian Ruder released Universal Language Model Fine-tuning for Text Classification (ULMFiT). ULMFiT demonstrated that applying transfer learning to text classification reduced error by 18-24% on the majority of six benchmark datasets. ULMFiT starts with a universal language model pretrained on Wikitext-103, in effect developing a general understanding of English. We fine-tuned this language model on the target data, then used it to train a propaganda classifier, as shown in the figure below.
Using just a few hundred labeled quotes, our classifier achieved 95% accuracy, which is consistent with state-of-the-art results for two-class classification problems. Once trained, this model can be deployed to the Snowball Edge to quickly assess large numbers of documents and tag those that are likely to be of interest to users.
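A rough sketch of this two-stage process, using the fastai v1 text API (an assumption; the library is not named above), might look like the following. File names and hyperparameters are placeholders.

```python
# ULMFiT-style fine-tuning sketch with the fastai v1 text API:
# Wikitext-103 pretrained AWD-LSTM -> fine-tuned language model -> classifier.
# 'quotes.csv' is a hypothetical file of labeled text examples.
from fastai.text import (TextLMDataBunch, TextClasDataBunch,
                         language_model_learner, text_classifier_learner,
                         AWD_LSTM)

data_lm = TextLMDataBunch.from_csv('.', 'quotes.csv')
data_clas = TextClasDataBunch.from_csv('.', 'quotes.csv',
                                       vocab=data_lm.train_ds.vocab, bs=32)

# 1) Fine-tune the pretrained language model on the target text.
lm_learner = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm_learner.fit_one_cycle(1, 1e-2)
lm_learner.save_encoder('fine_tuned_encoder')

# 2) Reuse the fine-tuned encoder to train the two-class classifier.
clf_learner = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
clf_learner.load_encoder('fine_tuned_encoder')
clf_learner.fit_one_cycle(2, 1e-2)

# 3) Export the trained model so it can be shipped to the Snowball Edge.
clf_learner.export('propaganda_classifier.pkl')
```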
Deploying Models to Snowball Edge
Deploying a machine learning model to the Snowball Edge is similar to deploying models to AWS. We used four AWS services on the Snowball Edge to deploy models:
- Greengrass: software that provides IoT capabilities
- S3: scalable data storage
- Lambda: event-driven computing service
- EC2: scalable cloud-computing platform
Since the Snowball Edge is capable of managing IoT sensors to trigger AWS services, it can run a pipeline without human intervention. A diagram of our machine learning pipeline inside of the Snowball Edge is shown below.
While the pipeline is fairly straightforward, it is powerful and flexible, capable of addressing many edge use cases:
- Images or text documents are uploaded to an S3 bucket.
- IoT triggers notify Greengrass that a new file was uploaded to S3.
- Greengrass invokes a Lambda function to process the data.
- The Lambda function checks the type of input (image or text) and sends it to the correct machine learning model on a Flask server running on an EC2 instance (see the sketch after this list).
- The model processes the input file and sends predictions back to the Lambda function.
- The Lambda function adds a message to a Greengrass queue so that the data can be sent to the graph visualization program.
- Another Lambda function manages the message queue and sends messages to a Gremlin graph database used to display the output data.
- Analysts can view alerts or further explore the prioritized raw data.
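The routing Lambda function referenced above might look roughly like the sketch below. The event shape, bucket and endpoint values, and topic name are placeholders rather than the exact configuration used on the device, and error handling is omitted.

```python
# Simplified sketch of the routing Lambda function from the pipeline above.
# Event shape, endpoints, and topic name are assumptions for illustration.
import json
import os

import boto3
import greengrasssdk
import requests  # assumed to be packaged with the Lambda deployment

# Local S3 endpoint on the Snowball Edge (placeholder value).
s3 = boto3.client("s3", endpoint_url=os.environ.get("LOCAL_S3_ENDPOINT"))
# Greengrass client used to publish results onto the message queue.
gg = greengrasssdk.client("iot-data")

IMAGE_EXTENSIONS = (".jpg", ".jpeg", ".png")
MODEL_SERVER = os.environ.get("MODEL_SERVER", "http://ec2-model-server:5000")


def handler(event, context):
    """Invoked by Greengrass when a new object lands in the ingest bucket."""
    bucket = event["bucket"]  # assumed event shape
    key = event["key"]

    # Pull the newly uploaded file from the local S3 bucket.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    # Route to the image or text model based on the file type.
    route = "image" if key.lower().endswith(IMAGE_EXTENSIONS) else "text"
    response = requests.post(f"{MODEL_SERVER}/predict/{route}", data=body)

    # Publish the prediction so a downstream Lambda can push it into the
    # Gremlin-backed graph used for visualization.
    gg.publish(
        topic="pipeline/predictions",
        payload=json.dumps({"key": key, "prediction": response.json()}),
    )
```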
Because the pipeline is simple and the Snowball Edge provides substantial on-board processing power, we can handle gigabytes of data in a matter of minutes and provide near-immediate insights to users at the edge.
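On the model side, the EC2-hosted Flask server could be structured roughly as follows. The route names mirror the Lambda sketch above and are assumptions, and classify_image/classify_text are hypothetical stand-ins for the trained image and text classifiers.

```python
# Sketch of the Flask model server hosted on the Snowball Edge's EC2 instance.
# Routes and model wrappers are assumptions for illustration.
from flask import Flask, jsonify, request

app = Flask(__name__)


def classify_image(image_bytes):
    # Placeholder: decode the bytes, apply the training-time transforms,
    # and run the fine-tuned image classifier loaded at startup.
    return {"label": "placeholder", "score": 0.0}


def classify_text(text):
    # Placeholder: tokenize the text and run the ULMFiT-based classifier.
    return {"label": "placeholder", "score": 0.0}


@app.route("/predict/image", methods=["POST"])
def predict_image():
    return jsonify(classify_image(request.data))


@app.route("/predict/text", methods=["POST"])
def predict_text():
    return jsonify(classify_text(request.data.decode("utf-8")))


if __name__ == "__main__":
    # Listen on all interfaces so the Lambda function can reach the server.
    app.run(host="0.0.0.0", port=5000)
```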
Solution
Using the Snowball Edge, the entire process for gaining insights from data can be automated, dramatically decreasing the time required to obtain them. Knowledge that would normally take weeks or months to acquire can now be delivered within minutes or hours. With the Snowball Edge, we can bypass the time-consuming task of transporting data to a remote location for processing.
Conclusion and Next Steps
In a matter of weeks, we went from brainstorming to having working prototypes on the Snowball Edge. To quickly develop these prototypes we relied on ULMFiT and transfer learning, both of which help lower the barrier to developing state-of-the-art models. While we developed models for two specific use cases, this approach can be just as easily applied to other image and text classification use cases.
As we work with customers to identify their priorities, we will continue to push the boundaries of what is possible at the edge. In addition to the prototypes discussed above, we plan to develop pipelines leveraging other algorithms. We will also look to update the models at the edge as new information is discovered. Most importantly, we will incorporate feedback from users in the field to ensure that we keep operational demands at the forefront of our development efforts.
Related blog posts
- NLP Transfer Learning on SageMaker: https://www.novetta.com/2018/10/nlp-transfer-learning-sagemaker/
- Named Entity Recognition and Graph Visualization: https://www.novetta.com/2018/09/named-entity-recognition-and-graph-visualization/