When vSphere 6.0 came out earlier this year, there was a lot of hubbub about one feature in particular, and rightfully so. VVols, or Virtual Volumes, are a way to virtualize storage arrays so that storage configuration moves and adapts dynamically alongside your virtual machines.
VVols don’t replace traditional virtual storage methods, so you can keep using your existing storage strategies and hardware along with VVols. Basically, no matter what kind of storage you’re using in the data center, vSphere treats it as a logical datastore object. Previously, each time you needed to configure a VM for performance or availability, you’d have to move it to a different datastore.
Read on to learn why virtualized storage is way cool, and for some reasons you might not want to dive in just yet.
As Director of Engineering and Operations at Green House Data, Mike Mazarakis has helped his share of companies migrate to the cloud. With 20 years of data center and networking experience, he's a self-described “pragmatist in IT” who has watched virtualization evolve into the concept of cloud we all know today.
Mike answered questions submitted by the public in a webcast last month. We interviewed him to get the answers to the most pressing cloud migration questions and help you plan your move to hosted IT. Look for more features in our cloud migration series in the coming weeks.
After the jump, learn how small businesses and enterprises differ in their approach to the cloud, read a walkthrough of one company's quest to move to the cloud while continuing to use existing IT assets, and see the three primary types of new cloud users—plus more!
As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved through Multiprotocol Label Switching (MPLS), a network technology that increases data center Quality of Service by giving administrators better control over traffic shaping and speeding the delivery of data packets to endpoints.
This blog looks at how MPLS works and how it helps data centers provide better network services.
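To make the contrast concrete, here is a minimal, illustrative Python sketch of the core idea (this example is not from the post, and every table entry and hop name in it is hypothetical): a conventional IP router performs a longest-prefix match against its routing table at every hop, while an MPLS router does a single exact-match lookup in its label forwarding table (LFIB) and swaps labels.

```python
import ipaddress

# Conventional IP forwarding: longest-prefix match over a routing table.
# (Hypothetical prefixes and next-hop names, for illustration only.)
ROUTES = {
    "10.0.0.0/8": "core-1",
    "10.1.0.0/16": "edge-seattle",
    "10.1.2.0/24": "edge-spokane",
}

def ip_next_hop(dst: str) -> str:
    """Return the next hop whose prefix is the longest match for dst."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, hop in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)
    return best[1]

# MPLS forwarding: one exact-match lookup in the label table (LFIB).
# Each hop swaps the incoming label for the outgoing one.
LFIB = {
    100: (200, "edge-spokane"),  # in-label -> (out-label, next hop)
    101: (201, "edge-seattle"),
}

def mpls_next_hop(label: int):
    """Swap the label and return (out_label, next_hop) in one lookup."""
    return LFIB[label]

print(ip_next_hop("10.1.2.55"))  # most specific /24 wins: edge-spokane
print(mpls_next_hop(100))        # (200, 'edge-spokane')
```

The exact-match dictionary lookup stands in for why label switching was historically faster and simpler at each hop than scanning prefixes, and why labels also make it easy to pin traffic classes to engineered paths.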
Virtualization is a standard practice for IT shops around the world. However, as more data center operators look to consolidate and migrate to new virtualized environments, some legacy applications remain stumbling blocks on the way to a 100% virtualized infrastructure.
Legacy apps are tough nuts to crack: your users are accustomed to them, so they're highly efficient in business use, but they might clash with your more modern IT tools, they might no longer be supported by the vendor, or the hardware underneath might be ready to kick the bucket.
“No worries,” I hear you say. “I can just virtualize the platform.”
That might work in most cases, but there are some legacy apps that either just won’t make the leap to virtualization or are too much trouble to virtualize to make it worthwhile. Here are the most common examples our techs run into:
E-mail, as we noted in last week’s blog, remains critical to business functions, and Microsoft Exchange is the most widely used e-mail platform in the world. Virtualizing Exchange servers on VMware can improve performance, allow you to consolidate various Exchange server roles, combine mailboxes, and increase the flexibility of your Exchange infrastructure, so you can scale up or down as your e-mail load demands.
You’ll end up with a fraction of the physical hardware (often a 5-10x consolidation) and a more responsive Exchange, plus you can design your environment for your current workload. No need to guess at your resource utilization 3-5 years down the road; just provision a few more VMs when the time comes.
While virtualization can increase performance (VMware claims a 16-core server running vSphere produced double the throughput of the same physical hardware), Exchange has its own set of requirements and demands, so take a look at these best practices before you start up the installer in your virtual environment.
So you want to jump into virtualization and take the open source route on your guest virtual machine operating system? Several of our customers have recently spun up Ubuntu VMs on top of VMware. Here are our tips for setting up and optimizing performance in a virtualized Ubuntu environment. These tips may also apply to other Linux distributions on top of VMware hypervisors.
Cloud computing is built on virtualization, a technology that allows multiple virtual machines to run on a single physical server. Although this means data centers can squeeze much more computing power out of each server, it also brings a set of additional security risks. Without insight into the other environments using the same server resources as your virtual machines, how can you protect your own data from malicious attacks on other tenants?
For some small businesses, the security risk associated with a multitenant cloud is outweighed by the security gains of having the provider’s skilled information security specialists working on their environment, especially if they lacked dedicated security staff in the past. However, other risks increase as virtual network tools and hypervisors present additional attack surfaces.
Our Infrastructure Consultants are here to facilitate the perfect cloud architecture for each customer. This post rounds up some of the most frequently asked questions they get about the gBlock Cloud, from security and encryption to licensing and customer support.
Docker is making waves. The company’s container technology, built on long-established open source Linux foundations, has raised millions in just 18 months and has some calling virtual machines outdated and ineffective. But just what are containers? With VMware professing its support and intent to integrate Docker with VMware management tools, we interviewed CTO Cortney Thompson to get the lowdown on this hot cloud technology and how it compares to virtual machines.
Patching is necessary to keep servers secure from attackers and viruses as well as free from bugs, which can sap productivity. Designing your server and virtual machine infrastructure to suit service levels and future change management will save you time and potential outages when the time comes to patch—and when it does, these simple best practices will help smooth the process.