University of Toronto, Communication Strategy Skills course
By Shahad Al Hamra
Presentation slides discussing Amazon's growth and the reasons for its success.
May 14, 2020
This presentation describes the establishment of a courier service: how a business plan can be executed and the related feasibility aspects of the plan.
Marketing Funnel, Customer Journey & Persona Mapping by VirtualCMO, Feb 2014 – Shane Lennon
This is part 2 of a multi-part series of frameworks, practical tools, skills and examples of tools to help marketing teams (and organizations) adapt in the digital world.
There are plenty of other approaches, and some are more 360-degree customer-relationship oriented – we took these approaches because they fit most organizations and cultures we work with, or where they are on the adoption curve, and they are stepping stones towards that 360-degree approach. We focused on the digital funnel for use in marketing, customer journey and persona profile mapping.
This is part 2 of the basic frameworks for a digital (any/all) marketing core competency and team – the focus is on being customer centric and taking an outside-in view of the market.
Carsharing, Ridesharing, Carpooling and all... – Hugo Guyader
Slides used in a class on Car Sharing. I present existing studies on car sharing, ride sharing, P2P rentals and various other forms of mobility services.
MARKETING PLAN FOR AN ANDROID APP: LIFTd (THE CARPOOLING APP) – Anjali Setiya
This presentation was created by Anjali Setiya, G.B. Pant University of Agriculture and Technology, Pantnagar, during an internship under Prof. Sameer Mathur, IIM Lucknow. It describes a marketing plan for a new carpooling app, LIFTd: Share the ride because we share the planet.
This tutorial is the magic potion for anyone who is struggling to rank high in the search results. We have come up with the top 10 free SEO tools, which will prove helpful in carrying out various SEO-related tasks. Here we go: 1. Keyword research tool – Google Keyword Planner. 2. Webmaster tool – Google Search Console. 3. WordPress plugin – Yoast SEO Plugin. 4. Analytics tool – Google Analytics. 5. Plagiarism checker – SmallSEOTools. 6. Page speed check tool – PageSpeed Insights. 7. Competitor analysis tool – SEMrush. 8. Website audit tool – Screaming Frog. 9. Backlink checker – Ahrefs Backlink Checker. 10. Mobile-friendly test tool – Google Mobile-Friendly Test. You will understand the importance of all these tools, how to use them, and other similar tools.
This is a competitive analysis of the food delivery service 'Uber Eats' in terms of their online presence, covering aspects such as SEO, SEM, online campaigns, social content, ORM, etc.
How is Ola Cabs bridging the gap between Supply and Demand in the transport industry? Can the Uberization model sustain itself in the long term? How do they even make money? Click this presentation to learn it all.
Digital marketing proposal new converted (1) – nehagupta60895
About Startup Solutions.
Startup Solutions is a one-stop consultancy for all your business and corporate requirements.
* Website Designing & Development
* Digital Marketing
* Logo Designing/Graphics
* Google Ads / PPC
* SEO / SMO
* WhatsApp Marketing
* Facebook, YouTube Subscribers, Instagram Marketing
* Bulk SMS, Email Marketing
Brand Framework Strategy - Digital Marketing Campaign - One-year digital communications plan/roadmap, including Tone and Voice Recommendations - Sample Editorial Calendar - Recommended Channel Mix - Top-line Influencer Strategy - Recommended Key Performance Indicators for a company interested in expanding within the United States.
Once a team is able to automatically produce deliverables, deploy them in a test environment and automatically assess some aspects of their quality, it has all the tools in hand to automatically roll out code to a production environment. While the main tools and techniques are already in place, this step cannot be taken lightly and presents its own challenges.
This presentation explains the different techniques for rolling out code in a production environment while limiting or avoiding downtime. More advanced techniques such as A/B testing and deployment rollbacks are also covered.
Topics included in this slide:
- Using Amazon Route53 to balance traffic between two deployments.
- Pushing updates to the production environment using Amazon OpsWorks
Day 1 - Introduction to Cloud Computing with Amazon Web Services – Amazon Web Services
Whether you are running applications that share photos or support critical operations of your business, you need rapid access to flexible and low cost IT resources. The term "cloud computing" refers to the on-demand delivery of IT resources via the Internet with pay-as-you-go pricing. Whether you are a startup who wants to accelerate growth without a big upfront investment in cash or time for technology or an Enterprise looking for IT innovation, agility and resiliency while reducing costs, the AWS Cloud provides a complete set of infrastructure services at zero upfront costs which are available with a few clicks and within minutes. Join this webinar to learn more about the benefits of Cloud Computing.
Reasons to attend:
- Learn the concepts of utility computing and elasticity and why these are important to a cost-effective, scalable and reliable IT architecture.
- Hear about the AWS service portfolio and the global footprint on which it is delivered and the value proposition of the AWS Cloud.
Users and mobility are driving change and disruption in the traditional IT environment.
Customers and users expect faster response times and availability of systems from anywhere, at any time.
Considering a next-generation data centre to support these changes and deliver the platform you need to right-source IT services and applications is a fundamental requirement.
Dimension Data provides insight into how we do this from a user perspective, through to defining a mobility strategy and architecting an IT landscape to support it.
Recent presentation to Infosys on HP's cloud capabilities, opportunities to partner, case studies and what HP is doing in Private Cloud to enable partner and business success in the ANZ market.
Cloud computing in Australia - Separating hype from reality – Russell_Kennedy
The growth of cloud computing in Australia has been exponential and analysts forecast that cloud computing will dominate the Australian IT landscape within the next decade.
It has a reputation for delivering economies of scale, reducing overheads and driving increased efficiencies within organisations. However, the reality is that, like any IT procurement, implementing a cloud computing solution for your business still requires careful planning, effective project management, robust contracts and sound oversight.
Russell Kennedy Lawyers delve into the risks and rewards of adopting Cloud Computing in Australia.
AWS Canberra WWPS Summit 2013 - Disaster Recovery with the AWS Cloud – Amazon Web Services
Disaster recovery is about preparing for and recovering from any event that has a negative impact on your IT systems. A typical approach involves duplicating infrastructure to ensure the availability of spare capacity in the event of a disaster. Learn how Amazon Web Services allows you to scale up your infrastructure on an as-needed basis. For a disaster recovery solution, this results in significant cost savings.
This session will cover practical strategies for breaking down barriers to delivering content, accessing information and overcoming economics to meet student needs where they are.
Speaker: Rob Carr, Solutions Architect, Amazon Web Services
What if everything you knew about change was wrong? – Oscar Trimboli
Navigating the myths of change and the importance of listening beyond what you hear, exploring the difference between a fixed and growth learning mindset
Oscar Trimboli
AWS Public Sector Symposium 2014 Canberra | Big Data in the Cloud: Accelerati... – Amazon Web Services
The cloud not only helps organizations do things better, cheaper, and faster; it also drives breakthroughs that transform mission delivery. This session will feature a panel of international government and university leaders who are using the cloud to take on big data challenges, and innovating in the “white space” between data silos to deliver impact.
Dynamics Day 2017 Melbourne - Transform your decision making – Empired
Do you see the full picture of your business’ health? Learn how every organisation can create, analyze and explore a range of business data, all through an easy-to-use, modern platform that turns your data into decisions using Predictive Analytics, Big Data and Power BI.
Come costruire servizi di Forecasting sfruttando algoritmi di ML e deep learn... – Amazon Web Services
Forecasting is an important process for a great many companies and is used in many areas to accurately predict the growth and distribution of a product, the resources needed on production lines, financial projections and much more. Amazon uses advanced forecasting techniques, and some of these services have been made available to all AWS customers.
In this session we will show how to pre-process data that contains a temporal component and then use an algorithm that, based on the type of data analyzed, produces an accurate forecast.
Big Data per le Startup: come creare applicazioni Big Data in modalità Server... – Amazon Web Services
The variety and volume of data created every day is accelerating ever faster and represents a unique opportunity to innovate and create new startups.
However, managing large amounts of data can seem complex: building large-scale Big Data clusters looks like an investment accessible only to established companies. But the elasticity of the cloud and, in particular, serverless services allow us to break through these limits.
Let's see, then, how it is possible to develop Big Data applications quickly, without worrying about infrastructure, and instead dedicate all our resources to developing the ideas behind innovative products.
You can now use Amazon Elastic Kubernetes Service (EKS) to run Kubernetes pods on AWS Fargate, the serverless compute engine built for containers on AWS. This makes it easier than ever to build and run your Kubernetes applications in the AWS cloud. In this session we will present the main features of the service and how to deploy your application in a few steps.
Twenty years ago Amazon went through a radical transformation aimed at increasing the pace of innovation. Over that period we learned how changing our approach to application development let us greatly increase agility and release speed and, ultimately, build more reliable and scalable applications. In this session we will explain how we define modern applications and how building modern apps affects not only the application architecture, but also the organizational structure, the development release pipelines and even the operating model. We will also describe common approaches to modernization, including the one used by Amazon.com itself.
Come spendere fino al 90% in meno con i container e le istanze spot – Amazon Web Services
The use of containers keeps growing.
When properly designed, container-based applications are very often stateless and flexible.
AWS ECS, EKS and Kubernetes on EC2 can take advantage of Spot Instances, bringing an average saving of 70% compared to On-Demand Instances. In this session we will look at the characteristics of Spot Instances and how they can easily be used on AWS. We will also learn how Spreaker uses Spot Instances to run applications of different kinds, in production, at a fraction of the on-demand cost!
In recent months, many customers have been asking us how to monetise Open APIs, simplify Fintech integrations and accelerate adoption of various Open Banking business models. Therefore, AWS and FinConecta would like to invite you to the Open Finance marketplace presentation on October 20th.
Event Agenda:
Open banking so far (short recap)
• PSD2, OB UK, OB Australia, OB LATAM, OB Israel
Intro to Open Finance marketplace
• Scope
• Features
• Tech overview and Demo
The role of the Cloud
The Future of APIs
• Complying with regulation
• Monetizing data / APIs
• Business models
• Time to market
One platform for all: a Strategic approach
Q&A
Rendi unica l'offerta della tua startup sul mercato con i servizi Machine Lea... – Amazon Web Services
To create value and build a differentiated, recognizable offering, successful startups know how to combine established technologies with innovative components built ad hoc.
AWS provides ready-to-use services and, at the same time, lets you customize and build the differentiating elements of your own offering.
Focusing on Machine Learning technologies, we will see how to choose among the artificial intelligence services offered by AWS and, also through a demo, how to build custom Machine Learning models using SageMaker Studio.
OpsWorks Configuration Management: automatizza la gestione e i deployment del... – Amazon Web Services
With the traditional approach to IT, for many years it was difficult to adopt DevOps techniques, which often relied on manual activities that occasionally led to application downtime and interrupted users' work. With the advent of the cloud, DevOps techniques are now within everyone's reach, at low cost, for any kind of workload, ensuring greater system reliability and significant improvements in business continuity.
AWS offers AWS OpsWorks as a Configuration Management tool that aims to automate and simplify the management and deployment of EC2 instances by means of Chef and Puppet workloads.
Learn how to use AWS OpsWorks to guarantee the reliability of your application running on EC2 instances.
Microsoft Active Directory su AWS per supportare i tuoi Windows Workloads – Amazon Web Services
Do you want to know your options for running Microsoft Active Directory on AWS? When moving Microsoft workloads to AWS, it is important to consider how to deploy Microsoft Active Directory to support Group Policy management, authentication and authorization. In this session we will discuss the options for deploying Microsoft Active Directory on AWS, including AWS Directory Service for Microsoft Active Directory and running Active Directory on Windows on Amazon Elastic Compute Cloud (Amazon EC2). We cover topics such as integrating your on-premises Microsoft Active Directory environment into the cloud and using SaaS applications, such as Office 365, with AWS Single Sign-On.
From facial recognition to detecting fraud or manufacturing defects, image and video analysis powered by artificial intelligence techniques is evolving and being refined at a rapid pace. In this webinar we will explore what AWS services make possible when applying state-of-the-art computer vision techniques to real-world scenarios.
Amazon Web Services and VMware are hosting a free virtual event on Wednesday, October 14th from 12:00 to 13:00 dedicated to VMware Cloud™ on AWS, the on-demand service that lets you run applications in cloud environments based on VMware vSphere® and access a wide range of AWS services, fully exploiting the potential of the AWS cloud while protecting existing VMware investments.
Crea la tua prima serverless ledger-based app con QLDB e NodeJS – Amazon Web Services
Many companies today build applications with ledger-like functionality, for example to verify the history of credits and debits in banking transactions, or to track the supply chain flow of their products.
At the heart of these solutions are ledger databases, which provide a transparent, immutable and cryptographically verifiable transaction log, but which are complex and costly tools to operate.
Amazon QLDB removes the need to build complex custom systems by providing a fully managed, serverless ledger database.
In this session we will see how to build a complete serverless application that uses QLDB's capabilities.
With the rise of microservice architectures and rich mobile and web applications, APIs are more important than ever for offering end users an exceptional user experience. In this session we will learn how to tackle modern API design challenges with GraphQL, an open-source API query language used by Facebook, Amazon and others, and how to use AWS AppSync, a managed serverless GraphQL service on AWS. We will dig into several scenarios and see how AppSync can help address these use cases by building modern APIs with real-time and offline data update capabilities.
We will also learn how Sky Italia uses AWS AppSync to deliver real-time sports updates to users of its web portal.
Database Oracle e VMware Cloud™ on AWS: i miti da sfatare – Amazon Web Services
Many organizations reap the benefits of the cloud by migrating their Oracle workloads, securing significant gains in agility and cost efficiency.
Migrating these workloads can create complexity during application modernization and refactoring, and performance risks can be introduced when moving applications out of on-premises data centers.
In these slides, AWS and VMware experts present simple, practical tips to ease and simplify the migration of Oracle workloads while accelerating the transformation to the cloud, dive into the architecture and show how to take full advantage of VMware Cloud™ on AWS.
Amazon Elastic Container Service (Amazon ECS) is a highly scalable container management service that simplifies the management of Docker containers through an orchestration layer controlling deployment and lifecycle. In this session we will present the main features of the service, the reference architectures for different workloads and the simple steps needed to quickly migrate one or more of your containers.
2 Peter 3: Because some scriptures are hard to understand and some will force them to say things God never intended, Peter warns us to take care.
https://youtu.be/nV4kGHFsEHw
Discover various methods for clearing negative entities from your space and spirit, including energy clearing techniques, spiritual rituals, and professional assistance. Gain practical knowledge on how to implement these techniques to restore peace and harmony. For more information visit here: https://www.reikihealingdistance.com/negative-entity-removal/
Exploring the Mindfulness Understanding Its Benefits.pptx – MartaLoveguard
Slide 1: Title: Exploring the Mindfulness: Understanding Its Benefits
Slide 2: Introduction to Mindfulness
Mindfulness, defined as the conscious, non-judgmental observation of the present moment, has deep roots in Buddhist meditation practice but has gained significant popularity in the Western world in recent years. In today's society, filled with distractions and constant stimuli, mindfulness offers a valuable tool for regaining inner peace and reconnecting with our true selves. By cultivating mindfulness, we can develop a heightened awareness of our thoughts, feelings, and surroundings, leading to a greater sense of clarity and presence in our daily lives.
Slide 3: Benefits of Mindfulness for Mental Well-being
Practicing mindfulness can help reduce stress and anxiety levels, improving overall quality of life.
Mindfulness increases awareness of our emotions and teaches us to manage them better, leading to improved mood.
Regular mindfulness practice can improve our ability to concentrate and focus our attention on the present moment.
Slide 4: Benefits of Mindfulness for Physical Health
Research has shown that practicing mindfulness can contribute to lowering blood pressure, which is beneficial for heart health.
Regular meditation and mindfulness practice can strengthen the immune system, aiding the body in fighting infections.
Mindfulness may help reduce the risk of chronic diseases such as type 2 diabetes and obesity by reducing stress and improving overall lifestyle habits.
Slide 5: Impact of Mindfulness on Relationships
Mindfulness can help us better understand others and improve communication, leading to healthier relationships.
By focusing on the present moment and being fully attentive, mindfulness helps build stronger and more authentic connections with others.
Mindfulness teaches us how to be present for others in difficult times, leading to increased compassion and understanding.
Slide 6: Mindfulness Techniques and Practices
Focusing on the breath and mindful breathing can be a simple way to enter a state of mindfulness.
Body scan meditation involves focusing on different parts of the body, paying attention to any sensations and feelings.
Practicing mindful walking and eating involves consciously focusing on each step or bite, with full attention to sensory experiences.
Slide 7: Incorporating Mindfulness into Daily Life
You can practice mindfulness in everyday activities such as washing dishes or taking a walk in the park.
Adding mindfulness practice to daily routines can help increase awareness and presence.
Mindfulness helps us become more aware of our needs and better manage our time, leading to balance and harmony in life.
Slide 8: Summary: Embracing Mindfulness for Full Living
Mindfulness can bring numerous benefits for physical and mental health.
Regular mindfulness practice can help achieve a fuller and more satisfying life.
Mindfulness has the power to change our perspective and way of perceiving the world, leading to deeper se
Why is this So? ~ Do Seek to KNOW (English & Chinese).pptx – OH TEIK BIN
A PowerPoint Presentation based on the Dhamma teaching of Kamma-Vipaka (Intentional Actions-Ripening Effects).
A Presentation for developing morality, concentration and wisdom and to spur us to practice the Dhamma diligently.
The texts are in English and Chinese.
The Chakra System in our body - A Portal to Interdimensional Consciousness.pptx – Bharat Technology
As each chakra is studied in greater detail, several steps have been included to strengthen your personal intention to open each chakra more fully. These are designed to draw forth the highest benefit for your spiritual growth.
The Book of Joshua is the sixth book in the Hebrew Bible and the Old Testament, and is the first book of the Deuteronomistic history, the story of Israel from the conquest of Canaan to the Babylonian exile.
The Good News newsletter for June 2024 is here – NoHo FUMC
Our monthly newsletter is available to read online. We hope you will join us each Sunday in person for our worship service. Make sure to subscribe and follow us on YouTube and social media.
In Jude 17-23 Jude shifts from piling up examples of false teachers from the Old Testament to a series of practical exhortations that flow from apostolic instruction. He preserves for us what may well have been part of the apostolic catechism for the first generation of Christ-followers. In these instructions Jude exhorts the believer to deal with 3 different groups of people: scoffers who are "devoid of the Spirit", believers who have come under the influence of scoffers and believers who are so entrenched in false teaching that they need rescue and pose some real spiritual risk for the rescuer. In all of this Jude emphasizes Jesus' call to rescue straying sheep, leaving the 99 safely behind and pursuing the 1.
"We have 50 million lines of C++ code. No, it's more than that now. I don't know what it is anymore. It was 50 million last Christmas, nine months ago, and was expanding at 8 million lines a quarter. The expansion rate was increasing as well. Ouch." – Amazon SDE, internal blog post, September 2004
Hi my name is Jon Jenkins. I’ve been at Amazon for nearly 8 years. During almost all of that time I’ve worked for Amazon’s retail business. Basically when I say “retail business” it means everything that’s not AWS.
There's a common misconception that the AWS services were created specifically for Amazon retail. However, nothing could be further from the truth. Amazon retail and AWS are really completely different businesses. We report into separate SVPs, sit in different buildings and operate independently. Like you, the Amazon retail business is just another customer of AWS. Today I've been invited here to provide you with a customer's perspective on how Amazon's retail web sites are using AWS to power our retail business.
The story I am going to tell you today begins in 1995 and ends in 2011. Over that period of time Amazon's retail web sites have gone through dramatic changes in terms of technology and the architectures we use to build applications. At the bottom of every slide in this presentation you'll see a timeline. As the story progresses you can follow along with the timeline to visualize what era of Amazon's history I'll be talking about. To understand our current approach toward our migration to the cloud it's helpful to have a little historical context about Amazon retail's technical history. So here we go with a whirlwind tour through Amazon's early years.
Here we are in 1995 and this is the original amazon.com home page shortly after launch. Jeff B. founded Amazon in 1994 and the site launched to the public in 1995. Jeff's basic concept was pretty simple. An internet bookstore could offer a much broader selection at a much lower price than any bricks-and-mortar bookseller.
Let's move forward to 1996. This is one of the earliest architectural diagrams of the Amazon retail business. Just a note: that box that says www.amazon.com is a single web server – it doesn't represent a fleet or group of servers. Note that this is a logical diagram. Everything pictured here ran on a single DEC Alpha box that served the web site. For example, the Amazon catalog and search indexes were built into Berkeley DBs that were pushed directly onto the web servers. The same host ran the ordering software and fulfillment systems. We had written our own custom web application server called Obidos – named after a town on the Amazon River. Humorously, the town of Obidos is located at the narrowest part of the Amazon, and engineers liked to joke that just as Obidos is the bottleneck of the river it was also the chokepoint for software on the retail web site.
In 1997 we added two more web servers to the fleet. We now had three Digital 4100 Unix servers. Screaming boxes for their day, they ran at 600 MHz and could hold up to four processors each.
In 1998, after the server room in our main office experienced a, how do you say, "water event" and the floor partially collapsed, we made a decision to move the web servers to a real data center in downtown Seattle.
By 1999 the original architecture was starting to show its cracks. This is an architectural diagram of a different sort drawn by one of our developers in the late 1990s. Obidos is represented by the South Park character Cartman. Like Cartman, Obidos had become bloated, ornery and difficult to deal with. More and more functionality had been piled into this core part of the platform and we were having trouble maintaining our pace of innovation.
By 2000 we had two distribution centers – one on the east coast and one on the west coast. It had become painfully obvious that it was a bad idea to have tight coupling between our distribution centers and our servers back in Seattle running the web site. Consequently we pushed through a project to decouple the systems powering the web site from fulfillment operations.
During 2001 we also migrated off the high-end 64-bit UNIX servers to more cost-conscious 32-bit x86-based Linux hosts. This marked the start of our move toward commodity servers and horizontal scalability. In 2001 we also took our first, fledgling steps toward a service oriented architecture. The first "service" at Amazon to be broken out from the main web server was our Customer Master Service that kept track of customer information. The service architecture was based on Tuxedo.
By 2005 Amazon had learned a lot about what a scalable web architecture should look like. This slide lists some of our takeaways at that point. Because many of the engineers that are building the AWS utility services have spent time in the retail side of the business you will notice lots of these philosophies embodied in the various AWS services.
From here on I'll slow down a little bit because we've reached the meat of what I'm going to talk about today. In March 2006, Simple Storage Service, the first AWS utility computing service, launched in production. I assume most of you are familiar with S3 at this point so I won't go into detail about what it is or how it works. However, it is worth mentioning that, contrary to popular myth, S3 was not built to satisfy Amazon retail's internal use cases. It was designed to be a general purpose file store for the internet. Later in 2006, Elastic Compute Cloud launched into private beta.
Amazon has a strong culture of eating our own dog food so we wanted to figure out some way to start using S3 in a meaningful way as part of our retail business. We had lots of network attached storage devices and hundreds of NFS servers, and since S3 is basically a file store these seemed like decent candidates to replace with S3. However, given that the amazon.com web site is the flagship of our retail business we really wanted to figure out a way to use S3 on the retail web site. But how? I mean what could we really do with just S3?
The answer is the widget pictured on this screen shot from the amazon.com web site. This is the IMDB Theatrical Release Information widget. In 2006 it appeared on almost every DVD and video detail page. The feature presents detailed information about the particular release of the movie that the user is purchasing. As many of you may be aware, IMDB is a wholly-owned subsidiary of Amazon. However, the businesses are run completely independently. We have different reporting structures, different technology platforms and sit in different buildings. Jeff B's goal is to keep the businesses as independent as possible. He wants both IMDB and Amazon to innovate and operate without constraints imposed by the other. This structure posed some unique challenges that are specific to this particular widget. To better understand what I'm talking about let's look at the architecture of how this widget is rendered on the amazon.com web site.
This is a fairly common model of a service oriented architecture. You'll see similar diagrams throughout the rest of this presentation. The basic goal of SOA is to provide reusable, scalable components as services that can be accessed by multiple consumers. So the way this feature worked is that the customer comes in from the left and hits the amazon.com web server residing in the Amazon retail data center. That web server issues a service call to the IMDB service to retrieve the theatrical release information. The IMDB service is really just a thin veneer over the IMDB database that stores this content. The service returns raw data to the amazon.com web server and then the web server transforms that data into HTML and inserts it into the page that is returned to the customer. In general this is a pretty decent pattern for building web site features at Amazon. However, in this case it was problematic.
First, this architecture resulted in coupling between the Amazon and IMDB businesses. You see, the actual code that transforms the raw service data into HTML lives on the amazon.com web servers. That means that if the IMDB team wants to change the look and feel of the widget or the data presented in the widget they have to adhere to the Amazon release schedule. Second, there are stringent runtime latency requirements for any content appearing on the Amazon web site. In this case, the IMDB team wasn't able to consistently meet those latency requirements for this feature. Additionally, there has to be coordination when it comes to scaling too. As the Amazon retail business grows we would need to keep IMDB in the loop so they could scale their service appropriately. Even worse, let's say Amazon is planning to have a sale on DVDs that will cause a big spike in traffic to these types of pages. We would have to make sure that IMDB was informed in advance so their service wouldn't collapse under the load. Third, in 2006 the IMDB and Amazon teams used different service frameworks. That meant that it was difficult to integrate the two components. Furthermore, in this architecture when there is a change to the service interfaces the client needs to update its software to account for those changes. All of this caused big problems in terms of evolving the feature over time. The solution we came up with was to use S3 as a service. Today this might seem fairly obvious. After all, there's lots of talk nowadays about REST services and loose coupling. But back in 2006 lots of people were still building things in an RPC style. Anyway, what we chose to do was use S3 as a service. The IMDB team would insert raw HTML into the S3 bucket and at runtime the amazon.com web server would simply pull that HTML out of S3 and concatenate it into the web page.
Here's a diagram comparing the new architecture to the old. At the bottom you can see that the customer traffic comes in from the left. It hits the amazon.com web server. But now we've built a generic S3 HTML puller component. That component basically maps a widget to an S3 bucket. The files in that bucket are named based on the ID of the product. So at run time the web server simply goes to the S3 bucket, pulls the file with the right name and concatenates it into the web page. I've purposely drawn the IMDB part as a black box because, frankly, it is a black box from the perspective of the Amazon web site. I have no idea how IMDB gets the content into that S3 bucket and I don't really care. For all I know it's a room full of monkeys manually typing in the content – it doesn't matter.
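To make the pattern concrete, here is a minimal sketch of what such an S3 HTML puller could look like, using boto3; the bucket name, key scheme and fallback behavior are illustrative assumptions, not Amazon's actual internal code.

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical illustration of the "S3 HTML puller" pattern described above.
# Bucket name, key scheme and fallback behavior are assumptions, not Amazon's real code.
s3 = boto3.client("s3")

WIDGET_BUCKET = "imdb-theatrical-release-widget"  # hypothetical bucket the IMDB team writes into

def fetch_widget_html(product_id: str) -> str:
    """Pull the pre-rendered HTML fragment for a product, keyed by its ID."""
    try:
        obj = s3.get_object(Bucket=WIDGET_BUCKET, Key=f"{product_id}.html")
        return obj["Body"].read().decode("utf-8")
    except ClientError:
        # If the fragment is missing or S3 is unreachable, render the page without the widget.
        return ""

# The web server simply concatenates the fragment into the page it is assembling:
# page_html = header + fetch_widget_html(asin) + footer
```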
So how did things work out? Here are the results from this change. First, we were able to serve pages to customers faster because S3 had a lower latency than the IMDB service. Second, IMDB doesn't need to think about scaling at all. S3 is massively scalable and as the Amazon web site traffic picks up S3 bears the brunt of that load, so we no longer need to coordinate traffic forecasts with IMDB. Third, the CPU utilization on the Amazon web servers was reduced. In the new model the web servers are simply concatenating pre-formed HTML into the page, not transforming raw service data into markup. This saves a lot of CPU and means we can serve more web pages per host. In the previous slide you'll note that this model results in fewer runtime dependencies for the website. Specifically, where before we had both an IMDB service and database, now we only have S3. Because we can use this same model to replace lots of other services with S3 we can greatly reduce the number of dependencies, which results in higher availability. The release model for the IMDB team is greatly simplified in this model. They can push new content to S3 whenever they want without any constraints imposed by the Amazon retail team. In fact, there's a neat model for them to evolve their feature. They can simply put a new version of the feature into a new S3 bucket and we can flip which bucket the web servers are pulling the content from. If there is a problem with the new content we can instantly flip back to the old bucket. Finally, in 2006 the Amazon web site didn't make a lot of use of AJAX. However, in retrospect this architecture set us up perfectly for AJAX features on the website. The browser can just as easily concatenate the HTML served by S3 into the web page as the web server can. This allows for a lot of flexibility in terms of how the web page is assembled without any underlying change in the storage.
That's a pretty simple, albeit powerful, way that we started using AWS services on the Amazon retail web site. But now let's jump forward to 2008 and look at something a little more complex.
In 2008 Amazon used several external monitoring services to measure the performance and reliability of our website so we could understand what our customers' experience was like. There are lots of these services from different vendors but many of them turn out to be really expensive to use at scale. Additionally, since most of these services were black boxes we were never really sure what they were measuring. This is a screen shot of an internal application we built called "Client Experience Analytics". The purpose of the application is to do external rendering of Amazon web pages in a real browser, save screenshots and metrics about the pages, and push the data into our metrics and alarming systems. Basically, it runs on an external network and provides us with a real perspective on what our customers experience when they use the amazon.com web site.
We knew we wanted to build an application like this, but there were several challenges. First, we knew the system would have a lot of moving parts. Rendering web pages from lots of remote sites and saving all the performance data is a complicated, workflow-based task. There are many components, each of which has to be reliable and scalable. Second, the application had to do the actual page rendering in remote data centers. When I say remote data centers I mean data centers that are not on the Amazon retail network fabric. Also, the more geographical diversity we could get in terms of these rendering agents the better. We suspected that the application would be pretty popular after we launched it and we wanted to be able to scale it up quickly and easily. Finally, we were given a development team of only two people and just a few months to produce the initial version of the software. That meant we had to find pre-built or reusable components to meet our timeline. In principle the solution was pretty simple. We would try to use as many of the AWS services as possible to avoid writing functionality ourselves.
This is an architecture diagram of the Client Experience Analytics application. I'm not going to walk you through every component in the application, but I do want to highlight the places where we made use of AWS services. The horizontal box at the top-center is Simple Queue Service. We push all the pages that need to be rendered into SQS. Below that, the three small boxes represent our fleet of EC2 hosts that pull work out of the queue and then render the pages in a real browser – IE, Firefox, etc. These EC2 hosts run in the AWS network, which is totally separate from the Amazon retail network, so we get real client-side performance data from them. The EC2 boxes record the data they collect into three separate repositories. Screen shots of each page are pushed into S3. This allows our internal users to see the page exactly as it was rendered by the browser. Metadata about the requests and performance data is written to RDS and SDB. At the top you can see that we also pump data into CloudWatch so that we can easily produce graphs for our users. On the far right you see an orange arrow. This is our notification system where alarms are propagated. At the time we built this application Simple Notification Service didn't yet exist. We will replace our own custom notification system with SNS in the future.
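A minimal sketch of what one of those rendering agents could look like follows, assuming boto3; the queue URL, bucket, metric namespace and the render_page() helper are hypothetical, and the RDS/SimpleDB writes are omitted.

```python
import boto3

# Hypothetical sketch of one Client Experience Analytics rendering agent.
# Queue URL, bucket and namespace are illustrative assumptions, not the real system's names.
sqs = boto3.client("sqs")
s3 = boto3.client("s3")
cloudwatch = boto3.client("cloudwatch")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/cea-render-jobs"  # hypothetical
SCREENSHOT_BUCKET = "cea-screenshots"  # hypothetical

def render_page(url: str) -> tuple[bytes, float]:
    """Placeholder for driving a real browser (IE, Firefox, ...) and timing the page load."""
    raise NotImplementedError

def work_loop():
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            url = msg["Body"]
            screenshot, load_time_ms = render_page(url)

            # Save the screenshot so internal users can see the page exactly as rendered.
            s3.put_object(Bucket=SCREENSHOT_BUCKET, Key=f"{hash(url)}.png", Body=screenshot)

            # Push the timing data into CloudWatch for graphing and alarming.
            cloudwatch.put_metric_data(
                Namespace="CEA",  # hypothetical namespace
                MetricData=[{"MetricName": "PageLoadTime", "Value": load_time_ms, "Unit": "Milliseconds"}],
            )

            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```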
So what were the results of this effort? Well, first we were able to deliver a complex application on a very short timeline with only a couple of development resources. It would have been impossible to do this without the pre-built services that AWS offers. Normally an application like this would require the negotiation of several additional co-lo agreements. I don't know about your businesses, but at Amazon that could take many months and would require coordination with finance, tax, infrastructure, security and other departments throughout the company. But because EC2 is present in several different geographies we were able to deploy a global application effortlessly. Because the EC2 hosts are on an external network we get accurate client-side performance statistics. With traditional external monitoring solutions you can't
OK, but the thing everyone is always asking about is our main web server fleet for amazon.com. What are we doing to migrate it to the cloud? One of the main benefits that people often talk about with the cloud is the ability to dynamically scale capacity up or down based on demand. The idea is that when you don't need all your capacity you can save money by releasing it back to the cloud. And, in theory, the web server fleet should be the poster child for this dynamic capacity story. Additionally, because we are an e-commerce site with lots of credit card interaction this part of our infrastructure has to be completely PCI compliant. There is simply no way we can risk losing this certification. So let's step forward one more year to 2009.
This is a typical weekly graph of traffic to the amazon.com web site. As you'd expect, there are peaks of usage during the day and troughs of usage at night. The variation from day to day is pretty consistent over the course of a week. If any of you run web sites you probably see traffic patterns very similar to this. Anyway, if the cloud can save you money by providing flexible capacity this would seem to be the ideal case for it.
Let me explain a little further. I spent several years of my life trying to figure out where to draw the red line on this graph. The line represents the expected maximum traffic plus a 15% buffer to account for any unexpected spikes. I ultimately got pretty good at predicting where we needed to draw the line and how much capacity amazon.com needed to purchase – at least assuming there were no unpredictable spikes in traffic due to product launches, unannounced sales or other external factors. The problem is that there's a lot of area between that blue line and the red line. All of that area is web server capacity I've purchased but am not using. How much is going to waste?
In this slide the blue area of the graph is the percentage of the capacity we are actually using and the red area is the capacity we've purchased but that is going to waste due to the traffic cycle and the safety margin. You can see that during a typical week nearly 40% of the capacity we purchased was not being used. And, frankly, we did a much better job of this than a lot of companies. It's not uncommon for server fleets to be wasting more than 50% of their total capacity.
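To make that arithmetic concrete, here is a toy calculation of the waste figure; the hourly traffic numbers are invented, and only the 15% buffer comes from the talk.

```python
# Toy calculation of wasted capacity from a daily traffic curve.
# The hourly numbers are invented; the 15% buffer matches the rule described above.
hourly_traffic = [40, 35, 30, 28, 30, 38, 50, 65, 80, 90, 95, 100,
                  98, 96, 97, 95, 92, 88, 82, 75, 65, 58, 50, 44]  # requests/sec, one day

provisioned = max(hourly_traffic) * 1.15                  # expected peak plus 15% buffer
average_load = sum(hourly_traffic) / len(hourly_traffic)  # what was actually used
waste = 1 - average_load / provisioned

print(f"Provisioned: {provisioned:.0f} req/s, average load: {average_load:.0f} req/s")
print(f"About {waste:.0%} of the purchased capacity went unused")
```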
But really the problem is worse than this. This graph shows a typical traffic pattern for the month of November on the amazon.com web site. You see, we don't just have a daily traffic cycle. We also have an annual traffic cycle that revolves around the retail calendar, which peaks in the fourth quarter each year. As you can see in this graph amazon.com ramps way up over the course of November. Again, the red line represents the expected peak plus 15%.
When we calculate the area on this graph you can see that during November amazon.com was wasting about three-quarters of its available capacity. Obviously, wasting a lot of capacity is not consistent with our goal of offering customers the lowest possible prices on the items we sell. So there's a huge business opportunity here if we can figure out a way to move the web server fleet to the cloud and scale it dynamically. But the problem is really a lot worse than what I'm making it out to be here. Depending on how long it takes to procure and provision those servers I may have to order them months in advance of when I need them in November, and I have to pay for them the moment they hit my data center even if they aren't yet serving traffic. And, of course, those hosts are still going to be sitting around after the holiday season passes even though I don't need them any more.
So the problem is pretty obvious in this case. We are wasting lots of money in underutilized capacity. Additionally, unexpected spikes in load are challenging to deal with. Even if we can get spare capacity in time, we have to bring up our server software on it under duress, which can lead to mistakes. Finally, scaling is often non-linear in this model. On amazon.com we tended to scale in units of racks, not individual servers. This means that if I only needed a few additional servers I would tend to scale in groups of 40 or so just to keep things simple. Furthermore, at some point I'm going to fill up all the rack positions in my existing data center and adding one more unit of scalability is going to require me to build a brand new data center. That will cost millions of dollars and require serious lead time. [Say this next part kind of jokingly.] The solution is really simple. It even fits on a single line in a PowerPoint deck. We just need to migrate the entire web server fleet to AWS. Hmm, well, that's easy. But we did come up with a plan for how to do it.
This slide is the architecture we came up with to transition the amazon.com web server fleet from what we call "classic" capacity to EC2. The customer traffic comes into the Amazon retail data center from the left and hits one of our existing production load balancers. What we did was to hook up our amazon.com data center to the AWS data center via the Virtual Private Cloud product. VPC makes AWS look just like your own data center from a networking standpoint. So the load balancer passes the request off to one of the web servers running in our EC2 clusters. You'll note that we have web servers running in multiple availability zones – remember, you still have to architect for availability as you move to the cloud. The other nice thing about VPC is that those web servers can talk back across the VPC boundary to services and databases running in the Amazon retail data center to get any content that they need to compose the pages. Ultimately the page gets built on the web server and it is passed back across the VPC boundary to the Amazon retail data center and to the customer. I'm really proud of this next slide.
[If you deliver the next few lines correctly there will often be some applause from the audience in this section.] This date, November 10, 2010, is the day that we turned off the last physical web server for amazon.com in the Amazon retail data center. Since that date every single web page on the amazon.com web site has been served by our fleet of EC2 web servers. In my opinion this is a pretty remarkable accomplishment given that only a few years earlier we had a tightly coupled, monolithic, C++, Cartman architecture. I'm pleased to say that amazon.com site availability in Q4 of 2010 was the best it had ever been and we were easily able to handle several high profile product releases, big sales and huge growth in the business overall.
So the results here are pretty obvious. We succeeded in moving our entire web server fleet for amazon.com – thousands of hosts – to the cloud. We are now in a position to dynamically scale our capacity up or down to meet customer demand. And we can scale up or down in units as small as a single host. I no longer worry about running out of space in a data center or having to build a new data center. I suppose someone over at AWS must worry about that sort of thing, but it's not my problem. Finally, traffic spikes don't cause nearly the problem that they used to. If we see an unexpected increase in load we simply provision more EC2 servers into the fleet. Of course, we can return them as soon as the traffic spike passes.
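As an illustration of that kind of elasticity, here is a hedged sketch that resizes a web fleet to match observed traffic; the talk does not say which mechanism Amazon retail used, so the Auto Scaling group, the names and the thresholds here are assumptions.

```python
import boto3

# Hypothetical sketch of sizing a web fleet to observed traffic plus a safety buffer.
# The Auto Scaling group, its name and the per-host throughput figure are illustrative assumptions.
autoscaling = boto3.client("autoscaling")

FLEET_ASG = "amazon-web-fleet"  # hypothetical Auto Scaling group name

def scale_fleet(current_rps: float, rps_per_host: float = 50.0, buffer: float = 1.15) -> int:
    """Size the fleet to the observed traffic plus a buffer, one host at a time."""
    desired = max(1, int(current_rps * buffer / rps_per_host) + 1)
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=FLEET_ASG,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )
    return desired

# During a traffic spike: scale_fleet(current_rps=12000)
# After it passes, the same call shrinks the fleet and the hosts go back to the cloud.
```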
We've moved into 2011. There is probably no piece of our infrastructure that has proven to be more problematic over the years than databases. We've constantly struggled to get our relational data stores to scale at a pace that can keep up with the growth of the business. So I thought it might be interesting to take a look at a somewhat novel approach we've implemented using AWS to deal with a database scaling issue.
One of the promises that Amazon makes to its customers is that you will always have the ability to review your complete order history. This screen shot shows my order history review page. I'm a bit embarrassed that I've only been a customer of Amazon since 1999. The old-timers at Amazon like to point out that I was pretty late to the e-commerce game. On the right in the red circle you can see that I can select any year to view the orders that I've placed during that year. As you might imagine, over the course of the company's history Amazon's retail customers have placed billions of orders. A few years ago we made an interesting discovery. Most discussion around database scaling revolves around how many transactions per second your database must process. However, in the process of trying to understand our infrastructure spending we stumbled upon the fact that there was a factor even more important than TPS – the cumulative amount of data stored by the database. If you think about it this makes sense. As you get more and more data into your database it puts increasing memory pressure on the hardware. Reducing the accumulated data that a database host has to store can dramatically improve the ability of the data store to scale, because more of the transactions can be served directly out of memory without hitting the disk.
This slide shows a high level view of the order retrieval service at Amazon. Obviously, it’s like pretty much every other service oriented pattern we’ve seen so far.
Here you can see the two most common ways that people approach scaling this type of architecture. In pattern 1 you simply buy bigger and bigger database boxes to handle the increased amount of data you need to store or transactions you need to process. In pattern 2 you shard your data across more instances to cope with the same factors. Pattern 1 gets expensive as you move into more and more exotic hardware platforms, and at some point you will hit a wall where there just isn't a big enough server to handle the load. Pattern 2 adds complexity in terms of failure cases, replication and handling inter-server communication. Ultimately neither one of these patterns makes us very happy.
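For readers unfamiliar with pattern 2, here is a minimal sketch of hash-based shard routing; the shard names and the choice of keying on customer ID are illustrative assumptions, not the order service's actual design.

```python
import hashlib

# A minimal sketch of "pattern 2" (sharding), assuming orders are keyed by customer ID.
# Connection handling is omitted; shard count and naming are made up for illustration.
SHARDS = ["orders-db-0", "orders-db-1", "orders-db-2", "orders-db-3"]

def shard_for_customer(customer_id: str) -> str:
    """Deterministically map a customer to one database shard."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# Every read or write for this customer's orders goes to the same shard:
# db = connect(shard_for_customer("customer-42"))
# Note the costs the talk calls out: resharding, replication and cross-shard
# queries all become the application's problem.
```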
So the problems are pretty straightforward. First, the cumulative data stored, not just the transactions per second, has a major impact on our ability to scale. It seems like we should be able to take advantage of the fact that lots of the older order data is infrequently accessed and that customers might be willing to wait a bit longer to get that data. Second, we don't really like any of the conventional approaches toward dealing with the challenges of scaling databases. Each carries its own pitfalls. Third, the most expensive "classic" hardware in the Amazon retail server fleet is our database boxes. Our DBAs and DB engineers require us to use high-end SCSI drives, ECC memory and other expensive components or they won't support our applications. To the degree that we can reduce the use of this type of hardware we can save lots of money. The solution we came up with is to create a tiered-storage solution using AWS. That solution takes advantage of the fact that there are really two types of data in our order database. First there is the highly dynamic, constantly changing influx of new orders that customers are constantly checking to view their delivery dates. Then there's the set of older orders that are immutable – the items have been delivered and too much time has passed for the customer to return the item.
The architecture of the solution looks like this. We denormalize and move orders from our relational order database to an S3 bucket when those orders move into a “closed”, or immutable, state.
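A minimal sketch of what such an archival job could look like, assuming boto3; the bucket name, the fetch/mark helpers and the key scheme are hypothetical, and the real system's encryption and batching certainly differed.

```python
import json
import boto3

# Hypothetical sketch of the tiered-storage idea described above: denormalize closed
# (immutable) orders out of the relational store and into S3. Bucket name, helpers
# and key scheme are made up; the real system also encrypts the archived orders.
s3 = boto3.client("s3")

COLD_ORDER_BUCKET = "cold-order-archive"  # hypothetical

def archive_closed_orders(fetch_closed_orders, mark_archived):
    """Move a batch of closed orders from the order database into S3."""
    for order in fetch_closed_orders(limit=1000):
        key = f"{order['customer_id']}/{order['order_id']}.json"
        s3.put_object(
            Bucket=COLD_ORDER_BUCKET,
            Key=key,
            Body=json.dumps(order).encode("utf-8"),
            ServerSideEncryption="AES256",  # stand-in for the encrypted repository mentioned below
        )
        mark_archived(order["order_id"])  # so the relational store can drop the row
```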
The results of this cloud implementation are pretty amazing. The team is taking a phased approach to migrating the cold orders to S3. So far they've moved more than 670 million orders to their encrypted S3 repository. That's more than 4 terabytes of data. I checked with the team a few weeks ago and they predict that within the next year or so they will have in excess of 50TB in their cold order store. Removing all of this cold data from Oracle has dramatically reduced the amount of money we need to spend on the ordering database instances. Although S3 is slower than pulling these orders from the database, that performance delta is imperceptible to the customer. Finally, by reducing the footprint of these databases we can now start thinking about ways to move the remainder of the data into one of the AWS database solutions.
So here we sit in 2011. The applications I've described today are only a small fraction of the systems that the Amazon retail business has migrated to the AWS cloud. We now push all of our server logs to S3 for long-term storage. We back up our databases to S3. We store our source code in the cloud. Our build systems use EC2. And the list goes on and on. Throughout our process of migrating to AWS we've learned a lot of lessons about how to successfully move from what we call "classic" architectures to cloud architectures. So I'd like to take a moment to step back and reflect on some of these meta-lessons. The lessons come in two groups – business lessons and technical lessons.
The first set of takeaways from the last five years has to do with how we run the Amazon retail business in light of the cloud. First, I spend significantly less time worrying about capacity planning than I used to. Dynamic capacity in EC2 and the bottomless pit of storage that is S3 mean that the consequences of inaccurately forecasting demand are low. This allows me to focus on features that my teams are building instead of running infrastructure. Second, I have far fewer conversations with finance. They don't have to deal with the big cap-ex requests that I used to submit and they understand the dynamic scaling model well enough to know that it allows us to run much leaner than we used to. Certainly they pay attention to the bill we get from AWS – yes, we get a bill just like you – but overall my conversations with finance are far less contentious. I get more innovation out of my organization now that we've started using the cloud. The Client Experience Analytics application is a good example. If I had had to negotiate half-a-dozen co-lo deals to get that project off the ground I never would have let them do the project. Because I say "No" less often the developers are happier. One nice thing is that I get to take credit for the AWS price reductions. When the finance guy asks me why the AWS bill went down in a given month I simply make up a story about how we focused on efficiency. It is important to think about any regulations and compliance requirements that your application may have. For instance, we have to ensure that there is absolutely no chance that we will run afoul of the PCI compliance requirements because it would be devastating for the retail business if we lost that certification. The good news is that we've been able to build lots of compliant applications using AWS. Just be sure to work with your internal audit, legal and security teams to verify that the implementation is acceptable. Finally, a personal favorite for me is that I don't have to worry about lease returns any more. Prior to moving to the cloud I used to have to deal with lease returns every single year. It would take a lot of time from my project managers and devs to deal with the swap out of the hardware going off lease for the new hardware that was coming in.
The second set of takeaways is more technical in nature. The first is that it's a good idea to pick a couple of simple applications to migrate so you can gain some initial experience with the cloud. We chose that IMDB feature on the detail page because it was a non-critical feature that only appeared on a subset of our web site. The approach to cloud-ifying it only involved one service and the architecture was very straightforward. Second, you don't have to migrate a component in one fell swoop. Figure out the end state that you want to get to and then come up with an incremental plan that allows you to systematically get to that end state. A good technical program manager can be a big help in this regard. As you migrate your first few applications you'll likely discover some reusable components that will be useful for migrating future applications. In Amazon retail an example was an encryption layer that sat on top of S3. This saved time because every developer didn't have to reinvent the wheel each time. Be on the lookout for these types of generic components and support them across your organization. You are going to be charting some new ground in terms of security as you migrate to the cloud. My experience has been that you can either engage security as partners or you can treat them as your enemy. We engaged our security team very early and involved them in our design process. The result was that they felt invested in helping us figure out ways to accomplish our objectives and they played an important role in improving the final solutions we came up with. As you'll recall from an earlier slide, by 2005 we had come up with some basic engineering principles that we knew we wanted to follow going forward – decoupling, simplicity, service oriented architectures, etc. Look for opportunities to migrate to AWS in a way that furthers your overall architectural agenda. It's pretty obvious that each of the examples I presented today aligned with that core engineering agenda. And finally, understand that the cloud is not going to make up for sloppy engineering. You still need to think about availability and performance. This means understanding the dependencies for your applications, building fault tolerant systems and learning about concepts like availability zones and redundancy models.
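As an illustration of the "encryption layer on top of S3" idea mentioned above, here is a hedged sketch of a thin reusable wrapper; the real internal component is not public, and the Fernet cipher and key handling here are purely illustrative assumptions.

```python
import boto3
from cryptography.fernet import Fernet  # third-party library used only for illustration

# A minimal sketch of a reusable encryption layer on top of S3, so individual teams
# don't reinvent it. Key management is omitted; none of this reflects Amazon's real component.
class EncryptedS3:
    def __init__(self, key: bytes, bucket: str):
        self._fernet = Fernet(key)
        self._bucket = bucket
        self._s3 = boto3.client("s3")

    def put(self, object_key: str, plaintext: bytes) -> None:
        self._s3.put_object(Bucket=self._bucket, Key=object_key,
                            Body=self._fernet.encrypt(plaintext))

    def get(self, object_key: str) -> bytes:
        obj = self._s3.get_object(Bucket=self._bucket, Key=object_key)
        return self._fernet.decrypt(obj["Body"].read())

# store = EncryptedS3(Fernet.generate_key(), "my-team-bucket")  # hypothetical bucket
# store.put("report.csv", b"sensitive,data")
```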