
    Migration To Cloud

    Reasons to migrate to Cloud

    Private, public or hybrid?

    When we say “migration to cloud”, we mean public or at least hybrid cloud, i.e. a service partially or completely outsourced to an external provider. Setting up your own private cloud usually doesn’t make much sense for an average organization, unless the cloud itself is the product the organization offers. Most public cloud pros become private cloud cons. Long story short, a real private cloud (i.e. a 24/7 available service with virtually unlimited resources) requires a huge investment in hardware; otherwise, what they call a “cloud” is not actually anything close to one, because almost every time you need to provision a few servers you run out of disk space, or memory, or CPU power, or network capacity, or full-time employees, or something else.

    Saving money

    Basically, the main and almost the only reason to migrate to the Cloud is saving costs in one form or another. Once a server is in the Cloud, you no longer pay for:

    1. hardware maintenance, upgrades and repairs
    2. electric power
    3. cooling and ventilation
    4. insurance

    In some pricing models you can also partially or completely forget about:

    1. software licensing fees
    2. IT service expenses

    Hardware utilization

    Purchased hardware often remains under-utilized for a long period of time. Then, for a relatively short period, it works exactly as designed and purchased for; then, as the business grows, it becomes overloaded and more hardware investment is required. It’s a cycle of life. Once your servers are in the Cloud, you forget about this problem: you delegate all or most of it to the Cloud provider and pretty much don’t care what kind of magic they do in their data centers to keep the technology up and running.

    Definitely you’ll never have to think about buying hard disks again, putting them in racks, wiring them, replacing them, etc. A couple of clicks, a few minutes of waiting, and you get the equivalent of a virtual supercomputer for just a few cents per hour. If you want the same thing in metal, you have to shop for it first, then go through the procurement department, attend a few meetings, get a few approvals, wait for payment, wait for shipment, pay big bucks to the technical guys for rack mounting, testing, installation of the operating system, middleware and applications, security and compatibility scans, then go through all the formal procedures of new service activation, etc., etc. – you name it. A half-year project – easy. The same thing in the cloud – 2 minutes, because most of this time- and money-consuming stuff is delegated to the cloud provider, virtualized long ago and fully automated.
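
    To give a sense of how little ceremony this involves, here is a minimal sketch using boto3, the AWS SDK for Python; the region, AMI ID and key pair name are placeholders you would replace with your own values.

        # Launch a single small virtual server in EC2 (a hedged sketch).
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-0abcdef1234567890",  # placeholder AMI ID
            InstanceType="t2.micro",          # a "tiny" instance type
            KeyName="my-key-pair",            # placeholder key pair name
            MinCount=1,
            MaxCount=1,
        )
        print("Launched:", response["Instances"][0]["InstanceId"])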

    This brings us to another important reason for moving to the Cloud:

    Saving time

    Saving time is another form of saving money. The fancy name for it is “agility”, which means that in the Cloud your team can be more productive. For instance, suppose you need to process about a hundred terabytes of log files from in-store RFID sensors to figure out customers’ preferences. Nowadays your guys don’t have to read all the Apache documentation and spend their expensive time figuring out how to fire up a Hadoop cluster: they can just spin one up in AWS and focus on developing the actual analytics that bring the company extra money.
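
    As an illustration, here is a hedged sketch of firing up a managed Hadoop cluster with Amazon EMR through boto3; the release label, instance types and counts are example values, not recommendations.

        # Start a managed Hadoop/Spark cluster instead of building one by hand.
        import boto3

        emr = boto3.client("emr", region_name="us-east-1")

        cluster = emr.run_job_flow(
            Name="rfid-log-analytics",
            ReleaseLabel="emr-5.30.0",          # example EMR release
            Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
            Instances={
                "MasterInstanceType": "m5.xlarge",
                "SlaveInstanceType": "m5.xlarge",
                "InstanceCount": 10,            # 1 master + 9 workers
                "KeepJobFlowAliveWhenNoSteps": True,
            },
            JobFlowRole="EMR_EC2_DefaultRole",  # default EMR roles
            ServiceRole="EMR_DefaultRole",
        )
        print("Cluster starting:", cluster["JobFlowId"])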

    In the Cloud there is almost no waiting. You need a new environment? Your guys can start it up in a few minutes, hours at most – definitely not weeks or months. A new working machine with development tools installed, a new SQL database, a bunch of storage, a high-performance cache, a proxy server, a load balancer? All take just a few minutes. How about something more complex, like a SharePoint farm? Depending on the complexity it could take some time to fill out all the specs, but it should still be less than an hour. And if you screwed it up and want to redeploy with other parameters, the second time will be much faster. You can create a template out of that deployment and then use the template to clone it into QA, UAT or production environments. And this is already another reason for moving into the Cloud:
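
    For instance, with AWS CloudFormation the whole deployment becomes a template you can stamp out repeatedly. A minimal sketch, assuming a hypothetical template file webapp-template.yaml (exported from the original deployment) that declares an Environment parameter:

        # Clone one proven deployment into QA, UAT and production stacks.
        import boto3

        cfn = boto3.client("cloudformation", region_name="us-east-1")

        with open("webapp-template.yaml") as f:  # hypothetical template file
            template_body = f.read()

        for env in ("qa", "uat", "prod"):
            cfn.create_stack(
                StackName=f"webapp-{env}",
                TemplateBody=template_body,
                Parameters=[{"ParameterKey": "Environment",
                             "ParameterValue": env}],
            )
            print(f"Provisioning stack webapp-{env} ...")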

    Flexibility

    Cloud providers offer an overwhelming number of options covering all possible needs. You can provision a server with virtually any number of CPUs, memory size, disk storage and operating system. Providers usually have standard pre-configured options (to save your time!) from something like “tiny” to “extra-extra-large”, but there are always custom types: GPUs if you need computing power, fast SSD storage, unlimited-size NFS, very cheap backup storage.

    Your website began to receive traffic and is constantly running out of memory? Not a problem: upgrade your LAMP server from tiny to small – 2 minutes and it’s in good shape again.
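
    In boto3 terms, that upgrade is a stop/modify/start sequence; a sketch with a placeholder instance ID (the instance must be EBS-backed):

        # Vertical scaling: switch a running instance to a bigger type.
        import boto3

        ec2 = boto3.client("ec2")
        instance_id = "i-0123456789abcdef0"  # placeholder

        ec2.stop_instances(InstanceIds=[instance_id])
        ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

        ec2.modify_instance_attribute(
            InstanceId=instance_id,
            InstanceType={"Value": "t2.small"},  # tiny -> small
        )
        ec2.start_instances(InstanceIds=[instance_id])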

    Disk filled up with media files? Not a problem – just increase the volume size. You have no idea how much disk space you’ll need? Not a problem – you can get a storage bucket of virtually unlimited size and pay only for what you actually use.
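
    Both cases are essentially one API call; a sketch with placeholder IDs and names:

        # Grow an EBS volume in place, and create a pay-per-use S3 bucket.
        import boto3

        ec2 = boto3.client("ec2")
        ec2.modify_volume(VolumeId="vol-0123456789abcdef0", Size=200)  # GiB
        # (then extend the filesystem inside the OS, e.g. resize2fs on Linux)

        s3 = boto3.client("s3")
        s3.create_bucket(Bucket="my-media-bucket")  # pay only for bytes stored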

    You want to separate the admin website from the client-facing one? Not a problem: you can move the shared database to another server and connect two separate websites to it, one for the content management people, another (read-only) for the clients.

    One web server for clients is not enough and you need many of them on different continents? Not a problem: you can clone it into different availability zones.

    Now the database is a bottleneck? Not a problem: you can upgrade the database server or build a database cluster to share the load. Load balancers can keep an eye on your whole infrastructure and forward new requests to a healthy machine.

    You don’t have time to keep all this virtual zoo in shape because you have better things to do? Not a problem: you can automate provisioning of a new VM into the cluster when certain conditions occur, for instance when average CPU utilization rises above 50%.

    Here we come to another reason for moving:

    Easy scaling up or down, vertical and horizontal

    Suppose your online selling system needs one server all year, but four weeks before Christmas traffic begins to grow, peaking at 200 times the usual level just a few days before the holiday. One server is completely overwhelmed with transactions and barely alive, so instead of increased sales you get nothing, simply because the server is too slow and disappointed customers are unable to complete a purchase. Your analytics show that neither two, nor three, nor even twenty servers would be able to keep up and serve all your Christmas customers.

    What do you do: buy 30 servers for your website just for a few days? Even if it were possible to rack them all on such short notice, what are you supposed to do with them after Boxing Day? Turn them off till next year? Return them to the store?

    Sounds ridiculous, but not if you are in the Cloud. Two things you can do:

    1. Upgrade your server: add CPUs and/or memory. Usually that means switching to a more powerful type, for example in Amazon from tiny to small, from small to medium, from medium to large, etc. This approach is called “vertical scaling”. It may be a good fit when the peak-time factor is not very high, say 2-10x.
    2. In our Christmas situation though, with a peak factor close to 100-200 or more, the “horizontal scaling” approach is handier: in just a few clicks you can set up an auto-scaling group in AWS or Azure (see the sketch after this list). It will automatically start as many copies of your website (sized for your regular load) as required, based on a simple metric like CPU utilization. Even more importantly, it will automatically shut down and destroy all the extra clones the minute you no longer need them, bringing your application back to the usual one-server deployment. A load balancer will automatically distribute traffic between the healthy instances of your web server.
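
    Here is what that setup might look like in boto3; the launch template ID and subnet ID are placeholders, and the numbers are illustrative only.

        # Horizontal scaling: 1 instance normally, up to 30 at the peak,
        # driven by a target of 50% average CPU utilization.
        import boto3

        asg = boto3.client("autoscaling")

        asg.create_auto_scaling_group(
            AutoScalingGroupName="webshop-asg",
            LaunchTemplate={"LaunchTemplateId": "lt-0123456789abcdef0"},
            MinSize=1,           # the usual single server
            MaxSize=30,          # Christmas-peak capacity
            DesiredCapacity=1,
            VPCZoneIdentifier="subnet-0123456789abcdef0",
        )

        asg.put_scaling_policy(
            AutoScalingGroupName="webshop-asg",
            PolicyName="cpu-target-50",
            PolicyType="TargetTrackingScaling",
            TargetTrackingConfiguration={
                "PredefinedMetricSpecification": {
                    "PredefinedMetricType": "ASGAverageCPUUtilization"
                },
                "TargetValue": 50.0,
            },
        )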

    Well, we are deeper in the forest now, but if you think about it, it’s all again about saving costs, saving time and making extra money. And here is one more very important reason for moving:

    Outsourcing IT Services

    How much do you spend on things like backups, monitoring, alerting, capacity planning, high availability, disaster recovery planning and tests, and encrypting and archiving data?

    Maybe nothing – but that means you don’t have them, and if tomorrow a tornado blows away your office or data center, you are out of business.

    Imagine having all of those, and more, automatically included in the package when you set up a new server. Most serious cloud providers do it, sometimes for a very reasonable extra fee, sometimes as a mandatory part of the deal. For instance, deploying a database cluster across different availability zones automatically solves the disaster recovery problem.
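
    For example, in Amazon RDS a standby replica in another availability zone is a single flag; a sketch with placeholder identifiers and an obviously fake password:

        # Multi-AZ database: RDS keeps a synchronous standby in another AZ
        # and fails over to it automatically.
        import boto3

        rds = boto3.client("rds")

        rds.create_db_instance(
            DBInstanceIdentifier="appdb",
            Engine="mysql",
            DBInstanceClass="db.t3.medium",
            AllocatedStorage=100,                   # GiB
            MasterUsername="admin",
            MasterUserPassword="change-me-please",  # placeholder
            MultiAZ=True,                           # the disaster recovery flag
        )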

    As a matter of fact, in the Cloud you have easy access even to services not available to you otherwise.

    Access to services not available outside the Cloud

    A few examples, just to give an idea:

    IBM Watson Services

    • AlchemyData News – provides news and blog content enriched with natural language processing to allow for highly targeted search and trend analysis. Now you can query the world’s news sources and blogs like a database.
    • AlchemyLanguage – collection of APIs that enable text analysis through natural language processing.
    • Conversation – add a natural language interface to your application to automate interactions with your end users. Common applications include virtual agents and chat bots that can integrate and communicate on any channel or device.
    • Dialog – create conversations any way you like to answer questions, walk through processes, or just to chat!
    • Document Conversion – transforms HTML, PDF, and Microsoft Word documents into normalized HTML, plain text, or sets of Answer units that can be used with other Watson services.
    • Language Translation – identify the language text is written in. Translate text from one language to another for specific domains.
    • Natural Language Classifier – interpret natural language and classify it with confidence.
    • Personality Insights – enables deeper understanding of people’s personality characteristics, needs, and values to help engage users on their own terms.
    • Retrieve and Rank – enhance information retrieval with machine learning.
    • Speech to Text – low-latency, streaming transcription.
    • Text to Speech – synthesizes natural-sounding speech from text.
    • Tone Analyzer – helps users understand the tones that are present in text.
    • Trade-off Analytics – helps users make better choices to best meet multiple conflicting goals, combining smart visualization and recommendations for trade-off exploration.
    • Visual Recognition – understand the contents of images. Create custom classifiers to develop smart applications. Create custom collections to search for similar images.

    Google BigQuery

    BigQuery is Google’s fully managed, petabyte-scale, low-cost analytics data warehouse. BigQuery is serverless: there is no infrastructure to manage and you don’t need a database administrator, so you can focus on analyzing data to find meaningful insights, use familiar SQL, and take advantage of Google’s pay-as-you-go model. BigQuery is a powerful Big Data analytics platform used by all types of organizations, from startups to Fortune 500 companies.
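
    A small sketch of what “familiar SQL, no servers” means in practice, using the google-cloud-bigquery client library and one of Google’s public sample datasets:

        # Run plain SQL against BigQuery; no cluster to provision or manage.
        from google.cloud import bigquery

        client = bigquery.Client()  # uses your default GCP credentials

        query = """
            SELECT name, SUM(number) AS total
            FROM `bigquery-public-data.usa_names.usa_1910_2013`
            GROUP BY name
            ORDER BY total DESC
            LIMIT 5
        """
        for row in client.query(query).result():
            print(row.name, row.total)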

    AWS Lambda

    AWS Lambda is a serverless compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security. Lambda runs your code on high-availability compute infrastructure and performs all the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code and security patch deployment, and code monitoring and logging. All you need to do is supply the code.
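
    In practice, “all you need to do is supply the code” means a single handler function; a minimal Python sketch:

        # The entire "server": Lambda calls this function once per event.
        import json

        def handler(event, context):
            """Return a greeting; 'event' carries the request payload."""
            name = event.get("name", "world")
            return {
                "statusCode": 200,
                "body": json.dumps({"message": f"Hello, {name}!"}),
            }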

    Cons

    Possible downtime

    No cloud provider, even the very best, claims immunity to service outages. Cloud computing systems are Internet-based, which means your access is fully dependent on your Internet connection. And, like any hardware, cloud platforms can fail for any one of a thousand reasons. Can your business absorb a prolonged bout of frequent outages or slowdowns? Which of your business processes would be delayed or halted if the service provider went down?

    Privacy and security concerns

    The cloud model has been highly criticized for the risk of data privacy and security breaches. In Canada and many other countries throughout the world, there are numerous privacy laws at both the federal and local (state or provincial) levels. In addition, entities in regulated industry sectors, such as financial services and health care, have specific rules and regulations governing the storage of their customers’ and patients’ data, the communication, hosting, transfer and disclosure of related information, and the outsourcing of services to third parties, particularly in foreign jurisdictions.

    The complexity of legal compliance is sometimes overwhelming, and many organizations have mistakenly assumed, or simply taken the position, that they cannot use a cloud service. That is not necessarily the case. For example, the Canadian federal Personal Information Protection and Electronic Documents Act (PIPEDA) does not prohibit cloud computing or cross-border data transfer by private sector entities in most industries, even when the cloud service provider (or a part of the cloud service provided) is in another country. (BC and Nova Scotia public sector statutes do prohibit cross-border outsourcing or cloud services unless certain exceptions apply, and numerous guidelines and public sector policies must also be followed.) However, PIPEDA and other private sector privacy laws and outsourcing guidelines establish rules governing use of the cloud and data transfer – particularly with respect to obtaining consent for the collection, use and disclosure of personal information, notification of cross-border information transfer, securing the data, and ensuring accountability for the information and transparency of practices.

    Before handing their data over to a cloud service provider, organizations must make sure that they and the provider have structured their operations, and their respective rights and obligations under their agreements, such that they are legally permitted to do so; that the data will be safely maintained, with access restricted to those who have the appropriate legal rights; and that the organization remains in control of its data to the extent required by law. This may mean that consumer-targeted cloud services are not suitable for large enterprises or those with sensitive information needs, but it does not mean that use of the cloud (or a variant of cloud services) is out of the question.

    Vulnerability to attacks

    In cloud computing, every component is potentially accessible from the Internet. Of course, nothing connected to the Internet is perfectly secure, and even the best teams suffer severe attacks and security breaches. But since cloud computing is built as a public service, it’s easy to run before you learn to walk: no one at AWS checks your administration skills before granting you an account – all it takes to get started is a valid credit card.

    Limited control

    To varying degrees (depending on the particular service), cloud users have limited control over the function and execution of their hosting infrastructure. Cloud provider EULAs and management policies might impose limits on what customers can do with their deployments. Customers retain control and management of their applications, data, and services, but not of the backend infrastructure. None of this is normally a problem, but it should be taken into account.

    Platform Dependencies

    Deep-rooted differences between vendor systems can sometimes make it impossible to migrate from one cloud platform to another. Not only can it be complex and expensive to reconfigure your applications to meet the requirements of a new host, but migration could also expose your data to additional security and privacy vulnerabilities.

    Our Deliverables

    Assessment of Business Requirements

    Working directly with your stakeholders, we collect and carefully document information about your:

    1. applications
    2. infrastructure
    3. data and workflows
    4. archives

    At this stage, one way or another, we’ll have to get answers to the following questions:

    1. What is the business goal for moving to the cloud?
    2. How important is availability of applications? How directly will loss of availability affect revenue? What is the cost of downtime (per day, per hour, per minute)?
    3. What reporting information is required?
    4. What is the expected useful life of the applications and data?
    5. How often will IT need to upgrade the applications?
    6. How intensive are the data update processes? Do they fall into the “Big Data” category?
    7. How will any planned mergers or acquisitions affect this application?
    8. What is the desired timeline for the move?
    9. What regulations must the applications and data comply with?

    Obviously, we’ll need the authority to talk to your users and decision makers to get this information. It will be the input to the next stage:

    Technical Specifications and Risk Profile

    At this stage we carefully investigate and consider the different factors which may or may not affect the move, such as:

    1. Load on the network. Migrating an application to the cloud involves careful solutioning for known cloud-networking challenges, such as TCP incast and dynamic network resource allocation. Even interfacing with the public cloud presents issues, including the often overlooked latency of the Internet.
    2. Adjustments needed to prepare your applications for the Cloud, if any. Line-of-business managers often assume that anything can live in the cloud, especially applications that are already virtualized, which is not necessarily true.
    3. Migration costs. For example, many financial institutions currently run their transaction processing on mainframes. The cost of rewriting legacy application code for modern platforms may be a complete show-stopper, or the business may decide to leave those applications in place.
    4. Testing applications with real data. The early phases of the DevOps life cycle often carry low risk; running code with test data doesn’t seem to reveal any problems. But the risks increase when you begin testing with production data volumes and velocity.
    5. Discovering data locations. Sometimes it’s very important to understand what data is stored where. To minimize the impact of the move, we need to be able to detect data breaches as soon as possible by revealing anomalous data flows.
    6. Considering tenancy and compliance. Certain workloads can easily run in a multi-tenant environment; others will require single tenancy or even on-premises location for proper assurance or regulations/legislation compliance. For instance, in the case of an e-commerce solution, certain parts of the solution will deal with data that your organization must protect. In such cases, an organization’s choice of cloud model may be limited.

    Having all this information carefully documented and presented, we move to the next stage:

    Matching your business requirements with existing cloud offerings

    At this stage we use our expertise to find a perfect match for your organization. We constantly scan the market of cloud offerings and update our knowledge base, taking into account the following factors:

    1. Pricing. Some cloud providers offer very transparent and well-documented pricing models; with others it is exactly the opposite, and getting a realistic cost estimate may be a challenge requiring a lot of prior experience.
    2. Expertise. We deal with industry experts and established brands familiar with almost any software your company wants to use. A real cloud expert should have anticipated what most enterprise users need.
    3. Reliability. How reliable is the cloud vendor? Can the cloud servers consistently handle heavy bandwidth and data exchange at peak time? We’ll make sure that you partner with a reliable hosting provider that can manage the entire hosting environment, especially if your applications receive heavy traffic.
    4. Financial stability. Ten or more years of experience and a stable position in the stock market (earnings and financial reports) are significant factors to consider.
    5. Manageability. The cloud service provider should offer a management system that lets your IT staff manage, control and maintain the environment efficiently and simply.
    6. Customer focus. It’s hard to work with a cloud vendor that is after profits and sales only. The vendor of choice should be customer-driven, providing 24/7 support via chat, phone or email. Customer satisfaction reviews are a great source of information for our recommendations.
    7. Transparency. Aside from transparent pricing plans, other things to consider are managed service level agreements (SLAs), security and data policies, and terms of service. The last thing you want is to compromise your clients’ private information or suffer an outage during peak season.
    8. Integration. Even if your on-premises working environment is heavily used, if you see the potential for the cloud to accelerate your business, we’ll find a cloud vendor that provides easy integration of your current network resources with cloud apps and servers.
    9. Openness. If you are not ready to move the entire infrastructure, we’ll make sure the cloud service provider of choice is open to flexible solutions.
    10. Network ownership. The cloud vendor must have a robust, secure and resilient network that delivers reliable connectivity to its cloud services. The vendor must be able to manage the unforeseen challenges of cloud services and take ownership of the overall infrastructure, rather than blaming a third party for insufficient bandwidth.

    Architecture and Road Map

    With all of the above well documented and presented, we’ll deliver the final Solution Architecture and Road Map documents for your approval. As necessary, we’ll conduct meetings with your IT, finance, security and other decision makers to present our plan. This is not necessarily a one-time event; more likely it will be an iterative process where we collect your feedback, adjust the solution (or convince you not to adjust it), and present the next version, until the project gets a big green “go”.

    Proof of Concept, Pilot and Final Deployments

    The first active stage presented in the Road Map is always a Proof of Concept (POC) or Pilot deployment. We’ll take one small but important piece of your infrastructure and actually migrate it. To make the POC successful, we will:

    1. Identify, list and tabulate all significant generic and application-specific metrics and KPIs, and their expected values before migration
    2. Take measurements all the way from the beginning until the migration is complete
    3. Compare measured values against expectations
    4. Document all issues, resolutions and lessons learned
    5. Cross-train your IT team at every stage
    6. Continue with further deployments according to the approved Road Map, as necessary

    At any point, when your IT team is ready, we can step aside. But we will remain on standby, on call 24/7 for you, as long as it takes.
