It's that time of the year again - AWS Summit London
It's hard to believe that 12 months ago I was compiling my thoughts from the 2018 AWS Summit. Where has the year gone? It only feels like six months. Last year AWS ran the summit over two days; this year there was only one.
Anyhow, a year later, would the event meet and exceed my expectations? Let's find out.
Booking and travel
As in previous years, AWS runs Summit events not only in the UK but across the globe; the UK (London) event typically takes place at the same time each year, in May. Booking usually opens around four weeks before the event, which gives you enough time to find and book accommodation and travel at sensible prices.
This year I stayed at The Good Hotel, which is essentially a floating hotel in the Royal Victoria Docks. The hotel is ideally situated near the ExCeL arena, which allowed me an extra 30 minutes in bed before heading down to register (06:30 is a lie-in these days!).
Registration on the day
A couple of weeks prior to the event, attendees are sent their personal QR code/passcode, which is used on the day to register. Armed with my QR code I headed to the registration hall, which appeared a smaller set-up compared to the previous year.
I registered early (08:15), as last year the queue waiting time was around 45 minutes and I didn't want to stand around carrying my luggage. Getting there early also gave me a chance to visit the partners and suppliers without the hassle of competing with 'swag grabbers'.
The first session of the day was the keynote, hosted in the main auditorium and lasting a couple of hours. The key takeaway for me was the Sainsbury's case study; it was interesting to hear how, within two years, they have moved 80% of their services to AWS, and to learn about the benefits realised. This was particularly relevant for one of my current clients, who are looking to adopt a similar microservice architecture.
What would you do with a million cores? High-performance computing on AWS
Speaker: Frank Munz, Technical Evangelist, AWS
Originally I had planned to attend a machine learning session; however, due to the time taken to get out of the keynote and the popularity of the session, I couldn't get into the theatre, so I joined the session hosted by Frank Munz.
I only caught the last half of the session, but it was interesting to learn how the human genome was processed in a matter of hours thanks to high-performance computing resources.
The session also covered the different processor types and their related use cases, including how in future we can use low-powered ARM-based processor architectures to complete tasks.
This was the first session I had attended that used noise-cancelling headphones, which worked really well; hopefully these will make a return at future events.
Building Modern APIs with GraphQL
Speaker: Robert Zhu, Principal Technical Evangelist, AWS
With the increasing demand for organisations to become more 'agile' (a term which, in my opinion, is overused), we now also see a rise in microservice architectures to support this new operating model.
I was particularly interested in attending this session as I'm presently working with clients who are adopting a microservice architecture, so hopefully there would be lessons learned I could take away.
Robert was an engaging presenter with experience from Microsoft and Facebook, where he worked on the GraphQL project as one of the contributing developers. He went through some key comparisons of REST vs GraphQL, including a live demonstration using Star Wars data.
Prior to the session I had a general understanding of the technology, but now it's definitely something I'm going to investigate further and potentially consider for future API projects.
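To illustrate the REST vs GraphQL comparison, here is a toy sketch of my own (not the talk's code, and with made-up Star Wars data) showing the core difference: REST endpoints return fixed resources, so fetching a film plus its characters costs one round trip per resource, whereas a GraphQL-style resolver satisfies the whole nested selection in a single request.

```python
# In-memory stand-ins for a REST API's /films and /characters resources.
FILMS = {1: {"title": "A New Hope", "character_ids": [1, 2]}}
CHARACTERS = {1: {"name": "Luke Skywalker"}, 2: {"name": "Leia Organa"}}

def rest_get_film(film_id):
    """Simulated REST call: GET /films/<id> returns only the film resource."""
    return FILMS[film_id]

def rest_get_character(char_id):
    """Simulated REST call: GET /characters/<id>."""
    return CHARACTERS[char_id]

def graphql_film_with_characters(film_id):
    """Simulated resolver for: { film(id) { title characters { name } } }.

    One request, returning exactly the shape the client asked for.
    """
    film = FILMS[film_id]
    return {
        "title": film["title"],
        "characters": [{"name": CHARACTERS[c]["name"]}
                       for c in film["character_ids"]],
    }

# REST: 1 + N requests (one for the film, then one per character).
film = rest_get_film(1)
names_rest = [rest_get_character(c)["name"] for c in film["character_ids"]]

# GraphQL: a single request resolves the nested selection server-side.
result = graphql_film_with_characters(1)
```

The over/under-fetching argument falls out of the same sketch: the REST responses carry whole resources whether or not the client needs every field, while the GraphQL result contains only the requested fields.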
Build, train, and deploy machine learning models at scale using AWS
Speaker: Julien Simon, Principal Evangelist, ML/AI, AWS
The session started off by introducing a customer success story, this time British Airways (BA). We learnt how BA use machine learning to track, monitor and maintain their aircraft fleet. I had assumed that in this new digital age, with everything connected, all the data would now be real-time; I was surprised to see that BA still consume and process simple binary I/O data as part of their machine learning models.
Julien also touched on the Amazon SageMaker service, but the majority of the session covered the BA case study; very interesting nonetheless.
Once the BA study was over I made an early dart to load up on coffee and sugary snacks as I was attending a machine learning workshop next.
Workshop: Using AI/ML to Personalize your Recommendations
Speaker: Dr Andrew Kane, Principal Solutions Architect, AWS
Refuelled and ready to go, I headed up to the workshop, which was located in one of the large meeting rooms on the 3rd floor - a welcome break from the crowds on the 1st floor.
The workshop was a great way of breaking up the day and getting hands-on with one of the new products yet to reach public release, Amazon Personalize. I was interested in getting a first look at this new service and learning from some of the AWS machine learning experts how to write a personalisation service. We were going to build a movie recommendation website which provided personalised suggestions.
The session was run by Dr Andrew Kane and supported by a few members of the machine learning team at AWS, who helped attendees with queries related to the code or infrastructure set-up.
We were provided with a GitHub repository and a comprehensive guide (included in the README.md) which provided everything we needed, from the CloudFormation template through to the S3 bucket policy details.
Once I had provisioned the necessary resources, it was time to fire up the Jupyter Notebook on my Mac and work through the GitHub steps. The first stage was to upload the training data for the machine learning model; we used the MovieLens data set in this example, which was used to create the following three models:
- Using a USERS file, create a model that takes into account users' demographic details such as age, gender and occupation
- Using an ITEMS metadata file, create a model that also takes into account the movie year and the top-4 genres associated with that movie as four separate metadata fields
- Using an ITEMS metadata file, create a model that also takes into account the movie year and compounds the top-4 genres into a single metadata field
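The difference between the two ITEMS variants is easiest to see in code. This is my own rough sketch, not the workshop's actual preparation script, assuming MovieLens-style rows where genres arrive as a single pipe-delimited string (e.g. "Adventure|Animation|Children|Comedy"):

```python
def top4_as_separate_fields(item_id, year, genre_string):
    """Variant 1: the top-4 genres become four separate metadata fields."""
    genres = genre_string.split("|")[:4]
    genres += [""] * (4 - len(genres))  # pad when a movie has fewer than 4 genres
    g1, g2, g3, g4 = genres
    return {"ITEM_ID": item_id, "YEAR": year,
            "GENRE_1": g1, "GENRE_2": g2, "GENRE_3": g3, "GENRE_4": g4}

def top4_as_compound_field(item_id, year, genre_string):
    """Variant 2: the top-4 genres compounded into one metadata field."""
    compound = "|".join(genre_string.split("|")[:4])
    return {"ITEM_ID": item_id, "YEAR": year, "GENRES": compound}
```

Either shape is then written out as a CSV and uploaded to S3 for Personalize to import; the point of building both models was to see which representation of genre information produced better recommendations.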
It took about 30 minutes for the training of the models to complete; this was performed on an m4.xlarge EC2 instance, which I thought was quite impressive given the size of the data set.
Whilst I was waiting for the training to complete, I moved on to building the website frontend; this would be where the personalised recommendations would be returned to users. The frontend was again built in Python, using the Django framework.
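As a minimal sketch of how such a frontend might serve recommendations (my own illustration, not the workshop code; the campaign ARN and title lookup are placeholders), the view layer calls the Amazon Personalize runtime for a user and maps the returned item IDs back to movie titles for rendering:

```python
def recommended_titles(user_id, campaign_arn, title_lookup, client=None):
    """Return movie titles recommended for a user.

    `client` is injectable so the function can be exercised without AWS
    credentials; a real Django view would pass
    boto3.client("personalize-runtime").
    """
    if client is None:
        import boto3  # assumed available in the workshop environment
        client = boto3.client("personalize-runtime")
    resp = client.get_recommendations(campaignArn=campaign_arn,
                                      userId=str(user_id))
    # The runtime response carries an itemList of {"itemId": ...} entries.
    return [title_lookup.get(item["itemId"], "Unknown")
            for item in resp["itemList"]]
```

Keeping the AWS client injectable like this also makes the view trivially testable with a stubbed response.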
In summary, it was an excellent session and useful to get hands-on building with one of the machine learning services. Once the service is made available for public use, I expect there will be large demand for it, especially from customers in the e-commerce space.
The summit, as per last year, was a great day, and there were plenty of learnings I took away. I particularly enjoyed the GraphQL session, learning how APIs can be designed in a more efficient way; definitely something I will be pursuing.
I also took a lot away from the workshop; it was a good experience getting hands-on building a fully functional product recommendation service. This is relevant to a number of our existing clients, so hopefully we can evaluate it further.
Hopefully next year AWS will consider running the Summit over two days, though, as I do think it was too busy and there wasn't enough time to attend all the sessions whilst the workshops were running at the same time.