Connecting Clouds using strongSwan


Summary

A universal need with Infrastructure as a Service (IaaS) is to securely connect isolated networks, whether they are public cloud, private cloud or on-premise. Some of this need can now be met with features such as Amazon VPC peering; however, even these have technical constraints that prevent blanket use (for example when used in conjunction with Direct Connect). An inexpensive, generic network-interconnect design pattern was therefore sought to address this typical use-case.

strongSwan, launched in 2005, is an open-source IPsec implementation originally based on the discontinued FreeS/WAN project. strongSwan can be quickly provisioned onto a virtual machine (VM), which can then connect an Amazon VPC network (via a standard Amazon VGW) to another network, whether that be a public or private cloud, an on-premise data centre, etc.

The following diagram illustrates a simple implementation connecting networks A and B, where Network B is an AWS network (VPC) and Network A can be a network of any type at any location with Internet access and an instance/VM available to run a VPN gateway. From here on, this diagram will be referenced to simplify key concepts.

Connect a VPN using strongSwan

Prerequisites

General

It is essential to avoid overlapping address ranges between the networks you are joining, as this will lead to routing complications.

Network A

strongSwan VM

strongSwan can be installed on Linux 2.6, 3.x and 4.x kernels, Android, FreeBSD, OS X and Windows. However, we have found the optimal platform to be an Ubuntu 14 Linux VM. This VM, which can run on a modest 1 CPU + 1 GB configuration (additional resources will be needed depending on load), will need an internal and an external interface.

Perimeter Security

A firewall should be deployed to protect Network A; at the very least it must have IP filtering capabilities.

Network B

VGW

The following instructions should be used to provision the Amazon VGW (an illustrative AWS CLI equivalent is sketched after the list):

  1. Create a new Customer Gateway in the VPC (Network B) with its IP address set to the external IP address of the strongSwan VM (e.g. 1.1.1.1)
  2. Create a Virtual Private Gateway in the VPC (Network B)
  3. Create a new VPN Connection in the VPC (Network B) using the above Customer Gateway and Virtual Private Gateway. Set routing to static, and add a static route to Network A (172.16.0.0/16)
  4. Once provisioned, download the configuration file for the VPN Connection, selecting the Generic/Generic/Vendor Agnostic options
  5. Open the downloaded file in Excel for clarity
  6. In the file you will see two sections: one called IPSec Tunnel #1 and the other called IPSec Tunnel #2. Both sections contain the following information, which is needed for later configuration:
  • Pre-Shared Key (a random string of characters used for authentication)
  • Virtual Private Gateway (the external IP address of the Amazon VGW tunnel endpoint – 2.2.2.2 and 3.3.3.3 respectively in the above example).
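For those who prefer the command line, the console steps above can be approximated with the AWS CLI as follows. This is only an illustrative sketch: the resource IDs (vpc-bbbb2222, cgw-cccc3333, vgw-aaaa1111, vpn-dddd4444) are placeholders, so substitute the values returned by each call.

# Customer gateway pointing at the strongSwan VM's external address
aws ec2 create-customer-gateway --type ipsec.1 --public-ip 1.1.1.1 --bgp-asn 65000
# Virtual private gateway, attached to the Network B VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-aaaa1111 --vpc-id vpc-bbbb2222
# Statically routed VPN connection, with a static route back to Network A
aws ec2 create-vpn-connection --type ipsec.1 --customer-gateway-id cgw-cccc3333 --vpn-gateway-id vgw-aaaa1111 --options "{\"StaticRoutesOnly\":true}"
aws ec2 create-vpn-connection-route --vpn-connection-id vpn-dddd4444 --destination-cidr-block 172.16.0.0/16
# The tunnel details (PSKs and endpoint addresses) can also be read back from:
aws ec2 describe-vpn-connections --vpn-connection-ids vpn-dddd4444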

Route Table

A route table will need to be created, with a route to Network A (172.16.0.0/16) targeted at the VGW. This route table should then be associated with all subnets needing to communicate with Network A.
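A rough CLI equivalent, again with placeholder IDs, would be:

aws ec2 create-route-table --vpc-id vpc-bbbb2222
# Route Network A traffic via the VGW and associate the table with each relevant subnet
aws ec2 create-route --route-table-id rtb-eeee5555 --destination-cidr-block 172.16.0.0/16 --gateway-id vgw-aaaa1111
aws ec2 associate-route-table --route-table-id rtb-eeee5555 --subnet-id subnet-ffff6666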

Security Group

A security group will need to be provisioned to permit inbound traffic into Network B. This security group should be associated with any resources in Network B that need to be accessed by resources on Network A.
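As a sketch only (the group name, rules and IDs below are examples; open only the ports your workload actually needs):

aws ec2 create-security-group --group-name from-network-a --description "Inbound from Network A" --vpc-id vpc-bbbb2222
# Allow ping and SSH from Network A as a starting point
aws ec2 authorize-security-group-ingress --group-id sg-gggg7777 --protocol icmp --port -1 --cidr 172.16.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-gggg7777 --protocol tcp --port 22 --cidr 172.16.0.0/16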

Installation

To install strongSwan, just run the following command on the VM:

  • apt-get install strongswan
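Because the VM forwards packets between Network A and the tunnel, you will almost certainly also need IP forwarding enabled on the Ubuntu kernel; a quick sketch:

# Enable IPv4 forwarding now and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf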

Security

Firstly, the Network A firewall should be opened to permit incoming traffic from the VGW tunnel endpoints 2.2.2.2 and 3.3.3.3, on UDP 500, UDP 4500 and IP protocol 50 (ESP) only. Additionally, AWS security groups should be carefully managed to protect incoming traffic to Network B.
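If the Network A perimeter (or the strongSwan VM itself) uses iptables, the openings might look like the following sketch, where 1.1.1.1 is the strongSwan external address and 2.2.2.2/3.3.3.3 are the VGW tunnel endpoints:

# IKE, NAT-T and ESP from the first VGW tunnel endpoint
iptables -A INPUT -p udp -s 2.2.2.2 -d 1.1.1.1 --dport 500 -j ACCEPT
iptables -A INPUT -p udp -s 2.2.2.2 -d 1.1.1.1 --dport 4500 -j ACCEPT
iptables -A INPUT -p esp -s 2.2.2.2 -d 1.1.1.1 -j ACCEPT
# Repeat the three rules for the second tunnel endpoint (3.3.3.3)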

Finally, the VGW Pre-Shared Keys need to be bound to the strongSwan service so the tunnel(s) can be established; these are maintained in /etc/ipsec.secrets.

/etc/ipsec.secrets

# This file holds shared secrets or RSA private keys for authentication.
# RSA private key for this host, authenticating it to any other host
# which knows the public part. Suitable public keys, for ipsec.conf, DNS,
# or configuration of other implementations, can be extracted conveniently
# with "ipsec showhostkey".
2.2.2.2 : PSK "????????????????????"

NOTE: it is vital to retain the double quotes on either side of the pre-shared key.

Configuration

Once installed, the service maintains its configuration in /etc/ipsec.conf. Simply paste the following AWS-optimised configuration into the file, making the changes highlighted in bold to fit your particular environment. Each connection (conn) is given a name (aws_tunnel_1 and aws_tunnel_2 in our example); these can be changed to reflect your specific requirements.

/etc/ipsec.conf

 
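As a guide only, a minimal configuration for the first tunnel might look like the sketch below. The proposal and lifetime values reflect the settings suggested in the AWS-generated configuration file, and the VPC CIDR (10.0.0.0/16) is an assumption; use the values from your own environment and downloaded file.

config setup

conn %default
    keyexchange=ikev1
    authby=secret
    ike=aes128-sha1-modp1024!
    esp=aes128-sha1-modp1024!
    ikelifetime=8h
    lifetime=1h
    dpdaction=restart
    type=tunnel
    auto=start
    left=%defaultroute
    leftid=1.1.1.1
    leftsubnet=172.16.0.0/16

conn aws_tunnel_1
    right=2.2.2.2
    rightid=2.2.2.2
    rightsubnet=10.0.0.0/16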

Once the configuration files are ready, you can start the service by running service strongswan start.

The service should already be enabled for startup on boot through an upstart job for strongSwan located at /etc/init/strongswan.conf.

Multiple Tunnels

Should you need to connect multiple AWS networks using a single strongSwan VM, this can be easily achieved by the following:

  1. Adding connections (four lines per conn) to the end of the /etc/ipsec.conf file, ensuring each connection name is unique
  2. Adding a new PSK line to the end of the /etc/ipsec.secrets file
  3. Opening up the Network A firewall to permit IPsec traffic to the new network
  4. Restarting the strongswan service.

/etc/ipsec.secrets

3.3.3.3 : PSK "????????????????????"

/etc/ipsec.conf

conn aws_tunnel_2
    right=3.3.3.3
    rightid=3.3.3.3
    rightsubnet=192.168.0.0/16

Testing

NOTE: it is not possible to use the strongSwan VM itself as the Network A test resource, as it sits outside the encryption domain and its own traffic will not be sent over the tunnel.

Tunnel

Check whether the tunnel is up by either:

  • From the AWS console, look at the tunnel details on the VPN Connection; this will list the tunnels as either up or down
  • From the strongSwan VM, run the ipsec status command; tunnels that are up are shown as ESTABLISHED (IKE) and INSTALLED (IPsec), while tunnels that are down will show as CONNECTING or will be absent.

If the tunnel is not up:

  • Ensure the strongSwan service is started by running the service strongswan status command
  • Ensure the configuration is correct, and that the Pre-Shared Keys in the /etc/ipsec.secrets file are surrounded by straight double quotes
  • Ensure the Network A firewall is open to the VGW tunnel addresses.

Routing

Check hosts on Network A can route to Network B and vice versa:

  • From VM A run traceroute (Linux) or tracert -d (Windows) to the internal IP address of VM B, then repeat in the other direction.

The first hop should be the internal IP address of the strongSwan VM. If it isn't, then:

  • Check the IP addresses used in the test
  • Check the strongSwan configuration file is correct
  • Check the Network B route table
  • Check the Network B VPN Connection static route.

Connectivity

Check hosts on Network A can ping hosts on Network B and vice versa:

  • From VM A run ping to the internal IP address of VM B, then repeat in the other direction.

A ping reply should be returned. If it is not then:

  • Check the IP addresses used in the test
  • Check all test computers are running
  • Check the security group (Network B).

Troubleshooting

The following commands will assist with any troubleshooting; a couple of deeper diagnostics follow the list.

  • ipsec status. Queries the tunnels
  • ipsec up <connection name>. Brings up a specific tunnel (called <connection name>)
  • ipsec down <connection name>. Tears down a specific tunnel (called <connection name>)
  • service strongswan status. Queries the strongSwan service
  • service strongswan stop. Stops the strongSwan service
  • service strongswan start. Starts the strongSwan service.
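For deeper diagnostics, the following are also useful (on Ubuntu the strongSwan charon daemon typically logs to syslog):

ipsec statusall                          # verbose view of IKE/IPsec SAs, proposals and traffic counters
tail -f /var/log/syslog | grep charon    # follow the charon log in real time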

I would like to take this opportunity to thank Ra'ed Hussein, Cloud Support Engineer at Amazon Web Services, who helped me iron out some of the wrinkles with the implementation of this solution!

Posted in AWS, Azure, Cloud, Environment, General, Security

Data Security with AWS (01/12/2015)

I attended this event at 60 Holborn Viaduct, London, EC1A 2FD.

Notes

  • All AWS staff are obliged to go through security awareness training; failing to do so results in escalation to their manager
  • They presented an interesting Customer Responsibility view (page 17 of securityupdate-151202085009-lva1-app6892.pdf) based on traditional IT; this may be a useful tool to compare against the AWS shared responsibility model (which they were keen to hammer home!)
  • They follow best practice internally, and anything which would otherwise require manual intervention is automated to prevent human error, increase quality and decrease costs
  • EC2 alarms now have provision for auto-recovery which is extremely appropriate for NAT instances
  • VPC Endpoints will soon be expanded to support more services beyond the existing S3 service
  • It is possible to restrict the use of VPC Endpoints through the use of security groups and IAM policies, i.e. so that there are different security profiles for accessing different buckets in a VPC.
  • You can also lock down an S3 bucket so that it can only be accessed from a VPC Endpoint, making it an internal-only bucket
  • They recommended that for any new AWS account we delete the root access key and secret key, create other admins, and then lock the root account password (protected with MFA) in a safe and forget it; this is because you can never apply security policies to the root account
  • They were recommending resource-based access policies, which are becoming more common on top of the other types, and using AWS managed policies where possible
  • All AWS APIs are secured with TLS and will soon be migrating to s2n (more on this later…)
  • Amazon Inspector is in preview but currently limited to the Oregon region and to Amazon Linux and Ubuntu
  • Their WAF offering has integration with the Imperva solution.
  • AWS Config can use Lambda functions for automated remediation, e.g. an email can be sent on discovery of any untagged resources
  • They recommend we follow their Security Blog
  • CloudFront can be used to support data sovereignty through the use of Geo-IP blocking
  • You can feed CloudTrail logs into CloudWatch Logs to help monitor and alert on AWS usage and changes
  • By default, CloudTrail logs are encrypted
  • EC2 instances are allocated bandwidth depending on their size; this is a design consideration for NAT instances with a heavy workload
  • Flow Logs can be enabled at the VPC, Subnet or ENI-level
  • You can feed Flow Logs into CloudWatch Logs to help monitor and alert on network traffic, e.g. more than 10 SSL rejects in 5 minutes; this could then trigger a Lambda function to update a security group/NACL to block the IP address in question
  • AWS are moving to implementing s2n throughout; s2n is their open-source TLS 1.2 implementation optimised for AWS
  • They are refreshing the VGW, recently adding NAT traversal support and an upgrade to AES-256, with more to be announced soon…
  • EBS root volumes can only be encrypted using CloudHSM (which comprises physical appliance(s))
  • If you replicate encrypted S3 data to another region then it will need to be specifically encrypted there too
  • Following the DevOps movement, AWS have coined the term DevSecOps… essentially the same, with security at the heart of all tools and techniques!
  • Interestingly there was a 50/50 split in the room with those already working within a DevOps framework and those using traditional segregated dev and ops teams!!
  • They recommended the use of CloudFormation as a control point, the idea being that a template could be delegated in portions relevant to different roles or teams through the use of YAML

Media

Slides from the day…

Posted in AWS, Security

AWS Pop-up Loft London (19/04/2016)

General

  • The AWS Pop-up Loft | London was hosted at The Brew, Eagle House, 163 City Rd, London EC1V 1NR
  • The venue consisted of two rooms located behind a shared café: one for hot-desking and the other for the training sessions
  • The hot-desking room contained around 40 desks and booths with power and free refreshments
  • The training room had formal seating for around 100 people plus sofas on the side for a dozen or so, again with free refreshments
  • The event provided free Wi-Fi, which offered typical ADSL performance, so it was adequate at best
  • The dress code was casual given the developer-heavy attendance.

Sessions

IAM Best Practices | Presenter: Matt Maddox (Manager Solutions Architect @ Amazon)

  • Top 11 best practices for IAM
    • Create individual users
    • Grant least privilege
    • Manage with groups
    • Restrict privilege further with conditions
    • Enable CloudTrail to get logs for API calls
    • Configure a strong password policy
    • Rotate security credentials
    • Use roles to share access (delegation)
    • Use roles for EC2 instances
    • Reduce use of root account.
  • Use Cases
    • Inline policies are best for one-to-one relationships with a given resource, whereas managed policies are best for reusable policies and where versioning and roll-back are of value
    • Use groups for logically managing IAM users, and managed policies for assigning common policies to users, groups, and roles.

Security Incident Response and Forensics on AWS | Presenter: Dave Walker (Specialist Solutions Architect @ Amazon)

  • Be prepared!
  • Be contactable by AWS; verify contact details on accounts – ideally routing to several people/a team
  • Produce an incident response run book
  • Ensure team contact details are visible and correct
  • Train the team
  • CloudFormation everything!
  • Backup all IP; CloudFormation templates, AMIs, DB backups, etc. Ideally, store these in another AWS account
  • Log everything. For anything not exposing logging, e.g. new services, consider fronting it with API Gateway to obtain logging information
  • Enable AWS Detailed billing
  • Use Center for Internet Security (CIS) Security Benchmark as a standard for OS security. Includes commands to self-certify.
  • Consider SIRS (Security Incident Response Simulations) – essentially game days where AWS work with you to inject simulated attacks into AWS services to help you assess your response.
  • Consider alerting on blocked traffic in VPC Flow Logs as a low-cost IDS service
  • If in doubt, raise early! AWS are the best people to assist in any security incident and would sooner be involved from the beginning, so raise a ticket and make it known it is in response to a security incident (they will likely call you within 34 seconds!!).
  • Review CIS AWS Foundations.

Finding the signal in the noise: effective SecOps with Splunk Cloud

Andrew Morris (SaaS Product Manager @ Splunk)

  • Splunk makes machine data accessible and usable to everyone
  • Any data, any source, any question
  • Splunk is the glue around all data!
  • Splunk’s largest customer is processing 1.7 PB of data daily
  • Gatwick uses Splunk to manage the entire airport, from baggage handling to traffic control, logistics, etc.
  • There are over 1,000 apps available for the Splunk platform (most are free) – think of these applications as domain intelligence for a given data source
  • Splunk offers data visualisation, data analytics, machine learning, as well as correlation of alerts
  • Baseline the norm and alert on deviation from this
  • Splunk Live is on 11th May (customers share experiences/case studies; there is also a training day)

Presenter: Ross McKerchar (Global IT Security Manager @ Sophos)

  • Case study
    • Brutal prioritisation; accept that you have limited capacity and focus on what is really important (may just focus on the critical assets only, etc.) – use CIS Critical Security Controls to assist this.
    • Focus on the achievable; the key here is only focusing on what is preventable (e.g., an SQL injection attack can be too quick to stop whereas a phishing attack may take weeks to unfold so may be more preventable)
    • Use a 4 step approach to building the solution (1. Log gathering, 2. Threat detection, 3. Governance (Config Management), 4. Security automation)
    • Mimikatz can be used to compromise Kerberos (therefore, in their experience, pen testers are hot on Active Directory – lock down admin accounts!!)
    • Security automation is the goal (e.g. blacklisting IPs on the fly, adding IPs seen in multiple attacks to a watch list for investigation, etc.) as security resources are expensive and rare!
    • Not able to discuss their use of Splunk for monitoring AWS due to some random reason regarding their special sauce (IP)??!
    • Consume Windows security logs, including new device/application installation events – these can make a big impact
    • It is important to hook into a CMDB and make groupings of critical servers
    • Most useful for governance (writing reports from dashboards) than threat detection or automation for Sophos
    • Correlation needs tuning – you are never finished
    • The net effect of Splunk is fewer security engineers and more business analysts
  • Demo
    • Platform with search engine built on top
    • Incredible power; search across all logs based on a time frame and search terms like err* or ip=x.x.x.x
    • Normalisation of data, e.g., users are the same regardless of the data source (AD, Linux, HR database, etc.)
    • Many maths functions for building complex search results
    • Build dashboards from the above queries
    • AWS
      • Resource count dashboard
      • CloudTrail/config
      • Volumes in use/capacity
      • IAM & VPC errors
      • Topology maps across multiple accounts
      • Easy to set up – around ten minutes.

Media

Slide decks from the day:

Summary

The day was interesting, and it was fun to hot-desk somewhere new; I picked up new knowledge and actionable improvements for the business. However, I was hoping for a more intimate and interactive day with more AWS and innovation buzz. Also, with travel, it was a long and intense day!!

Posted in AWS

AWS Summit London (07/07/2016)

 

This year's event, like last year's, took place at the ExCeL in London.

Welcome

  • Gavin Jackson, Managing Director of AWS for UK and Ireland, welcomed the audience.
  • He mentioned that the day was the largest AWS summit in Europe to date.
  • He reassured customers that Amazon was committed to a post-Brexit economy, with the continuation of its investment in a UK region, which should be online late 2016/early 2017. The message was very much: keep calm and carry on!

Keynote

  • Werner Vogels, CTO of Amazon.com presented the Keynote.
  • The message was clearly on AWS success and growth, with a reflection over the last decade.
  • In the last ten years they have achieved the following:
    • $10B run rate
    • 64% YoY growth
    • 1M+ active customers
  • AWS released over 700 updates last year and had already released over 400 by May this year, so they are due to surpass last year's rate!
  • Werner highlighted the five pillars of design, development, and operations:
    • Security
    • Reliability
    • Scalability
    • Predictable performance
    • Cost control
  • Tom Blomfield, CEO of Mondo, highlighted the opportunities available now that the Financial Conduct Authority (FCA) has permitted the use of cloud services by financial services organisations.

Big Data Architectural Patterns and Best Practices on AWS

  • Data sympathy; the point was made that the right tool should be chosen to fit the data.
  • It is best practice to decouple storage and compute for scalability and cost efficiency.

Getting Started with AWS Lambda and the Serverless Cloud

  • Dean Bryen, Solutions Architect at AWS, provided an introduction to the Lambda and API Gateway services
  • He shared his serverless compute manifesto:
    • Functions are the unit of deployment and scaling
    • No machines, VMs, or containers visible in the programming model
    • Permanent storage lives elsewhere
    • Scales per request. Users cannot over- or under- provision capacity
    • Never pay for idle (no cold servers/containers; can run anywhere)
    • BYOC (Bring Your Own Code)
    • Metrics and logging are a universal right.
  • Common use-cases for Lambda include the following:
    • Data processing (including image and video transcoding)
    • Log analysis
    • Data enrichment
    • Abstracting business logic from the front-end (web or mobile)
  • State should be managed in JavaScript on the front-end, as no cookies are available to use.
  • It was confirmed that Linux containers are used under the hood to provide the service.
  • VPC integration is a recently released feature, enabling a Lambda function to seamlessly interact with AWS resources hosted in a VPC, for example EC2, RDS, etc.
  • VPC integration supports S3 endpoints and VPC peering; however, the function will not have access to the Internet unless you have an existing NAT instance or NAT gateway.
  • The following Lambda VPC best practices were identified:
    • VPC is optional – don't turn it on unless you need it. Otherwise, the function will be launched into an Amazon-managed VPC.
    • The ENIs used by the Lambda VPC feature count against your quota, so ensure you have enough to match your peak concurrency levels, and do not delete or rename them!
    • Ensure your subnets have enough IPs for those ENIs
    • Specify at least one subnet in each AZ; otherwise Lambda will obey, but cannot be fault-tolerant
  • James Hall of a digital agency in the UK called Parallax then presented his experience of delivering innovative solutions using Lambda. This was the same talk he presented at the last AWS UK Meetup – see notes here.
  • He advised that deployment and version control are the key to success.
  • He also mentioned that he was staggered by how well his serverless solutions performed at scale; they stress tested using multiple solutions and noted that the solution actually speeds up under load, presumably as the containers are already pre-warmed.

Media

The day’s slides can be located here.

Posted in AWS

UK User Group Meetup #20 (25/05/2016)

Venue

Skills Matter, CodeNode, 10 South Place, London EC2M 7EB (a 5-minute walk from London Liverpool Street railway station).

A 23,000 sq ft tech events and community venue. CodeNode provides fantastic meetup, conference, training and collaboration spaces with unrivalled technology capabilities for our tech, digital and developer communities.

I found it to be an excellent venue for this type of event, with a large conference room with second-to-none A/V facilities, and a bar/networking area on the lower ground floor.

News

Registration opened for the AWS Summit in London which will be unveiled later today. Registration can be accessed at http://aws.amazon.com/summits/london/.

There was an offer to purchase the following AWS book at 25% off the RRP by using the code awsug. http://www.manning.com/books/amazon-web-services-in-action.

The RSVP for next month’s event was opened at the end of the night – so book now if interested!!

Agenda

Introduction

Firstly there was a welcome and introduction by the user group's organiser, Ijaz Jabbar, who introduced the sponsors.

Session 1

Session 1 was Transport for London’s Open Data Journey by Rikesh Shah, Lead Digital Relationship Manager, TfL and Tom Garratt, Technical Architect, TfL.

  • Rikesh and Tom provided an overview of Transport for London’s (TfL’s) open data journey which has resulted in the creation of nearly 500 travel apps powered by TfL data. They also covered how moving to a cloud-based infrastructure has resulted in a wide range of benefits including greater agility and being able to satisfy large volumes of real-time data.
  • They discussed how important open data was to them and that the key to their success was exposing transport data to the developer community, currently standing at around 9,000-strong, which then drives innovation at a far greater pace than they could manage internally.
  • A common theme is that most traffic originates from mobile devices, not desktops.
  • Their website uses the same APIs as their open data platform.
  • They are now building sensors into any new transport infrastructure to enhance the data moving forward. And they are moving towards a unified API.
  • Most of their code is written in C# and running on IIS servers fronted by Varnish caching servers – it is thought CloudFront was not available when they began implementing their open data platform.

Session 2

Session 2 was “Building a [micro]services platform on AWS” by Shaun Pearce, VP Engineering, Gousto.

  • Shaun discussed how Gousto migrated from a single, monolithic PHP web application. It's a journey many are on or will soon be starting, and they wanted to share some of the lessons they've learnt along the way: what worked, what didn't, and what they wish they'd known from the start.
  • Shaun is VP Engineering at Gousto, helping them to deliver a modern e-commerce platform utilising microservices on the AWS platform. Before that, Shaun was a Solutions Architect at Amazon Web Services, where he worked within the UK retail sector helping retailers of all sizes to either build for the cloud or migrate legacy systems onto AWS.
  • These guys were using CloudFormation to deploy distinct snippets at the network, security, and content level.
  • They used a combination of Jinja2(?) + Python to create more concise CloudFormation templates.
  • They are using Ansible for configuration management and calling the above templates to build out the AWS stack, as they believe CloudFormation is the quickest to market in supporting new AWS functionality.
  • They had standard Ansible roles for common server types such as Apache, Tomcat, etc. and used them to configure the services, harden the instances, create local users, etc.
  • Like many of the sessions, they discussed how CloudWatch was insufficient to correlate events, so they devised a system to overcome this by routing all logging to an S3 bucket and using a Lambda function to filter it into a DynamoDB table, where it could be used effectively to view the wider picture. They use Snowplow, an open-source data analytics platform.
  • They are now looking to use Lambda to create smaller API services.

Session 3

Session 3 was UEFA Campaign by James Hall, Parallax.

  • James Hall is the author of the popular jsPDF library and also the founder of a digital agency in the UK called Parallax. The agency develops applications such as the Enterprise Car Club app for car sharing, BaySentry for parking solutions, and an advertising platform used in airport business lounges for British Airways. They also wrote the export functionality in Gravit.io — an online image editor tool.
  • These guys are all over Lambda!
  • They were responsible for building https://thisonesforyou.com/ in just three weeks! This is a website requested by David Guetta to enable 1 million people to listen to and upload to his official Euro 2016 track. People would sign up for marketing, listen, upload, and then receive personalised artwork for their input into the track.
  • This whole service was built using a serverless architecture including Lambda. The decision was made to go this route as demand was unknown due to its viral nature. Traditional EC2 could have cost tens of thousands of pounds per month, whereas this approach cost just £30/month!
  • They said that GraphQL could be the silver bullet for serverless architecture.
  • They are currently working on a project to add someone into a video of a famous landmark, such as Times Square, in real time. They are using Lambda to do this, with parallel processing for rendering and more functions to stitch the frames together.

Takeaways

Once again, I feel the surge in serverless architecture and believe it is something I should be keeping a watching brief on.

Posted in AWS, Cloud

UK User Group Meetup #19 (30/03/2016)

Venue

Skills Matter, CodeNode, 10 South Place, London EC2M 7EB (a 5-minute walk from London Liverpool Street railway station).

A 23,000 sq ft tech events and community venue. CodeNode provides fantastic meetup, conference, training and collaboration spaces with unrivalled technology capabilities for our tech, digital and developer communities.

I found it to be an excellent venue for this type of event, with a large conference room with second-to-none A/V facilities, and a bar/networking area on the lower ground floor.

Attendees

An RSVP is required for attendance with names checked at the door. This strictly controlled attendance is in stark contrast to the Cambridge meetup, and a constant watch of the waiting list is advised.

450 people were due to attend this event, but there were a lot of last minute cancellations so it is estimated that there were around 400 in attendance – certainly enough to fill the large conference room (see photos).

Dress code seemed to be smart/casual. Again, organisations took the opportunity to advertise through simple branded apparel; the best had to be a NordCloud polo with the tagline “we’re recruiting” on the back inviting a dialogue for any potential candidates.

News

It was announced that the UK user group, which has grown from 12 members originally, has successfully secured sponsors (NordCloud, A Cloud Guru, and eSynergy Solutions) and a venue (Skills Matter) for the upcoming year, to give consistency and stability to its members.

The RSVP for next month’s event was opened at the end of the night – so book now if interested!! I have my name down in the hope of attending once again.

Also, the London AWS Pop-up Loft opened registration during Ian Massingham’s update. The event has been extended over two weeks (18-28 April) due to popular demand. Please take a look at available training opportunities here. Again, I secured anything of interest to the team while there was availability as they typically run out quickly!

The AWS Summit will be hosted in London Docklands again and will be held over two days this year (6-7 July); one day aimed for enterprises and the other to allow more time for technical deep dive sessions.

Agenda

Introduction

Firstly there was a welcome and introduction by the user group's organiser, Ijaz Jabbar, who introduced the sponsors and briefed us on the group securing sponsors and a venue for the upcoming year (consisting of 5 further sessions).

Session 1

Session 1 was called “Betting on the Cloud – How AWS is helping us move faster and deal with some of our most challenging workloads”, and was presented by Michael Maibaum, Chief Architect, Sky Betting.

  • Sky Betting and Gaming is one of the country's largest online gambling operators and has been growing very quickly over the last few years. They believe that AWS will help them remain productive with a larger technical team and cope with ever larger customer numbers. This session described their efforts to re-imagine their data platform for international expansion, dealing with 50,000 API calls/s and 250,000 concurrently connected clients.
  • This was an interesting technical insight into the world of online betting and the dependent infrastructure and systems. They told us they are experiencing 40% YoY growth and admitted that most of their infrastructure was still on-prem for the following reasons:
    • Regulation, compliance and tax issues
    • Aspects of the architecture not being suitable (e.g. 40-core Informix servers, etc.)
    • They want to continue to leverage strong knowledge of the existing infrastructure (consisting of VMware, F5, etc.)
  • They have a very spiky demand pattern based on the football schedule, and Boxing Day is one of their busiest days! Therefore, scaling is a big issue for them.
  • As a workaround for high CPU load on their load balancers, they do SSL offload using HAProxy and stunnel
  • For one of their newer applications, Cash Out, they are using a combination of CloudFront/API Gateway/Lambda to achieve 100 transactions per second
  • They warned that you need to be careful with the AWS terms and conditions for testing as the definition of the simulated load varies between services (e.g. load testing Lambda directly is not permitted)!
  • They now have a couple of applications hosted in AWS and are hoping to launch these in time for one of their busiest times, the Grand National – pending change control!!

Session 2

Session 2 was a presentation by Ian Massingham, Chief Evangelist, Amazon Web Services.

  • It was due to be a demo of the Amazon Lumberyard game engine; however, his new Razer Blade laptop got held up at customs!
  • The session then turned into a reflection on AWS at its 10th anniversary, along with updates on various recent announcements – we have already covered off the important ones through team email.
  • He did, however, announce that the London Loft opened registration, and the event has been extended over two weeks (18-28 April) due to popular demand.

Session 3

Session 3 was called “The next generation holiday search platform at Thomas Cook, Micro-services in the wild” and was presented by Pascal Laenen, Head of NUMO-labs at Thomas Cook Group.

  • Pascal talked about how he uses a microservice architecture to build a serverless architecture using the AWS stack. He also provided an overview of how JavaScript and Node.js are used to build new search capabilities at Thomas Cook, while explaining the flexibility and scalability of the system. He also pointed out some of the challenges he is facing in implementing the solution.
  • This was an incredibly fascinating topic for me as the solutions architect who built a next-gen chemistry search engine as my last assignment. Although we didn't have the option of Lambda at the time, it seems we were on the right path and at the bleeding edge of innovation, as a lot of what we were doing is being used by Pascal and his team, including microservices, a graph database, API gateways, continuous delivery, UAT testing in production through the use of weighted DNS records, etc.

Takeaways

I have since heard the feedback from others saying that last night’s meeting was one of the best so far.

Like the Cambridge meetup, the event was very developer and solutions architecture focused with little interest for IaaS fans. However, it is good to keep abreast of how the community is using AWS and any AWS product updates, as well as networking with our peers.

Posted in AWS, Cloud, Meetups

Cambridge AWS User Group Meeting #7 (12/07/2016)

Last night I attended the latest AWS Meetup in Cambridge, which I found to be the best to date; this was due to the relevance of the topic for the night, AWS Lambda, and the variety and quality of the presenters.

The event was held in a fit-for-purpose meeting room at the Metail Cambridge office and hosted by Jon Green. A good selection of beers and snacks was made available throughout the night.

Jon's welcome included a brief recap of the recent AWS London Summit. One announcement I missed during the summit was Amazon S3 Transfer Acceleration, which makes use of the nearest edge node for data upload. Jon also highlighted that re:Invent, the annual global AWS conference, was selling fast…really fast! No user group discount was available, and it is unlikely this will change.

Our first presenter for the evening was Danilo Poccia, Technical Evangelist at Amazon Web Services who took us through Lambda and Amazon API Gateway in some detail. He advised that it is best practice to separate data from the application for security and scalability.

I learned that Amazon Cognito, the authorisation and authentication service recommended for use with Lambda, can also handle multi-factor authentication (MFA). Also, API Gateway is fronted by Amazon CloudFront, includes an optional cache, and can be used to proxy to external APIs, thereby helping to consolidate and migrate end-points. It was apparent that a new generation of management services is being built using Lambda, with Amazon CloudWatch logs and events as triggers; for example, to perform actions on instance start/stop, or when a system or application event is logged. Danilo warned that Lambda functions must have at least the basic execution role to write log events or they will fail.

He then walked us through the integration between Lambda and API Gateway, including how to structure the resource, HTTP verb and method (Lambda function). He discussed the use of stages and stage variables to manage environmental resources; for example, how a function can read a stage variable and then use a different database, bucket, etc. depending on a given environment. He told us how well API Gateway is supported by the command line interface (CLI), which, along with Swagger, is a good way to automate the service.

Danilo has recently published a book on the topic, called AWS Lambda in Action: Event-Driven Serverless Applications.

He completed the session with live demos, ranging from a simple “hello world” function called through API Gateway to a face-recognition function using OpenCV on photos uploaded to an S3 bucket.

Next up was Daniel Vaughan, Software Craftsman at EMBL-EBI, who shared his experiments with Lambda. Daniel told us how he built a simple-to-use solution to map developer skills and interests without registration, etc. He built this using a combination of Neo4j in a Docker container on a micro EC2 instance, along with the Amazon SES, SNS, API Gateway and Lambda services – all without going above the free tier!

He gave us a live demo at the end, and his experience seemed to be positive, noting that Amazon could do more to make Lambda production-ready in terms of automated provisioning (e.g. Amazon CloudFormation).

Finally, Ben Taylor gave us an insight into how to use Lambda and CloudFormation to build a hacky PaaS service. With the service, a user could add their application through an S3-hosted website calling an API Gateway end-point that interacts with a DynamoDB table, which in turn calls a Lambda function via an SNS notification to launch a CloudFormation stack.

Ben shared the following experience while using Lambda and CloudFormation:

  • The IAM permissions are hard to write
  • The IAM permissions offer no protection from malicious code; you can create new IAM policies and roles
  • Debug CloudFormation permission-related issues by trying to provision resources interactively whilst assuming the role in a different browser
  • Ben observed packaging bugs, possibly when using Python modules that are compiled locally against stuff that isn’t available in Lambda
  • Logs for different events sometimes get interleaved
  • Each attribute in DynamoDB should only be updated by one handler/source else it becomes unmanageable
  • DynamoDB is kept in sync effectively
  • The CloudWatch logging is good
  • Add debug statements for the event and your response. logging.debug gets pretty verbose, though, if you also use boto3
  • Anything which runs on Amazon Linux should run on Lambda so spin up an instance for debugging
  • A Lambda function must never die or CloudFormation will wait indefinitely
  • Avoid overlapping resources; either deploy interactively or using CloudFormation or else dependencies will break

An observation of the night was that open source is the norm and you are nobody unless you have a GitHub repo URL in your signature!

Thanks to Jon, the sponsors, and the presenters for another great meeting. I look forward to the next one.

Posted in AWS, Cloud