Tag: Cloud Storage


Migrate Amazon S3 buckets to Azure Blob Storage

Migrate AWS S3 buckets to Azure Blob Storage

With the latest version of AzCopy (version 10), you get a new feature which allows you to migrate Amazon S3 buckets to Azure Blob Storage. In this blog post, I will show you how you can copy objects, folders, and buckets from Amazon Web Services (AWS) S3 to Azure Blob Storage using the AzCopy command-line utility. This makes it easy to migrate S3 storage to Azure or to create a simple backup of your AWS S3 bucket on Azure.

AzCopy uses the Put Block from URL API, which allows you to copy files directly from AWS to Azure. The data is transferred server-to-server between the two clouds, so the copy does not consume much bandwidth on your computer, and you can even copy large objects or entire buckets from S3 to Azure.

Configure access and authorize AzCopy with Azure and AWS

First, you will need to install AzCopy on your machine. After that, you will need to authorize AzCopy with Microsoft Azure and AWS. To authorize with AWS S3, you use an AWS access key and a secret access key.

Get an AWS access key and secret access key (for example from the AWS IAM console), and then set these environment variables:

Windows (cmd.exe):
set AWS_ACCESS_KEY_ID=<access-key>
set AWS_SECRET_ACCESS_KEY=<secret-access-key>

Linux:
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>

macOS:
export AWS_ACCESS_KEY_ID=<access-key>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
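
If you are working in PowerShell on Windows rather than cmd.exe, the set syntax above will not work; the PowerShell equivalent uses the $env: drive:

$env:AWS_ACCESS_KEY_ID = "<access-key>"
$env:AWS_SECRET_ACCESS_KEY = "<secret-access-key>"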

Copy an AWS S3 object to an Azure blob

You can copy a single object using the following command:

azcopy cp "https://s3.amazonaws.com/tomsbucket/tomsobject" "https://tomsstorageaccount.blob.core.windows.net/tomscontainer/tomsblob"

Copy and migrate Amazon S3 folder to Azure

You can copy a folder from an Amazon S3 bucket to Azure Blob Storage:

azcopy cp "https://s3.amazonaws.com/tomsbucket/tomsfolder" "https://tomsstorageaccount.blob.core.windows.net/tomscontainer/tomsfolder" --recursive=true

Copy an Amazon S3 bucket to Azure blob storage

You can also copy one or multiple Amazon S3 buckets to Azure:

azcopy cp "https://s3.amazonaws.com/tomsbucket" "https://tomsstorageaccount.blob.core.windows.net/tomscontainer" --recursive=true

I hope this gives you a quick idea of how you can migrate data from Amazon S3 storage to Azure using AzCopy. If you want to know more, check out the official Microsoft Docs about how to copy data from Amazon S3 buckets by using AzCopy.



Synchronize Folder with Azure Blob Storage using AzCopy

Sync Folder with Azure Blob Storage

With AzCopy v10, the team added a new function to sync folders with Azure Blob Storage. This is great if you have a local folder, on a server or even on a client device, which you want to keep synchronized with Azure Blob Storage. The sync command not only uploads new or changed files; with the --delete-destination parameter you can also let AzCopy remove files on Azure Blob Storage that were deleted locally, and vice versa.

First, make sure you install and set up AzCopy.

Sync Folder with Azure Blob Storage

You can use the following command to sync a local folder with Azure Blob Storage. It only syncs new and changed files, comparing file names and last-modified timestamps.


 
azcopy sync "C:\Temp\images" "https://tomsaccount.blob.core.windows.net/images" --recursive

As mentioned, if you set the --delete-destination parameter to true, AzCopy deletes files at the destination without prompting. If you want to check which files would be removed before AzCopy deletes anything, set --delete-destination to prompt.

To make sure you do not accidentally delete data, enable the soft delete feature on the storage account before you use the --delete-destination parameter.
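
For example, this variant of the sync command prompts before removing any blob that no longer exists locally:

azcopy sync "C:\Temp\images" "https://tomsaccount.blob.core.windows.net/images" --recursive --delete-destination=prompt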


To test this, I deleted the file "3.jpg" locally and ran azcopy sync again; the file "3.jpg" was then removed from Azure Blob Storage as well.

Sync to a local folder

To sync Azure Blob Storage to a local folder, you can use the following command.

 
azcopy sync "https://tomsaccount.blob.core.windows.net/images" "C:\Temp\images" --recursive

As of today, the sync feature only supports syncing local folders with Azure Blob Storage. Syncing with AWS S3, or from one storage account to another, is currently not supported.

I hope this gives you a quick overview of how you can sync folders with Azure Blob Storage. If you want to know more, check out the Microsoft Docs about how you can transfer data using AzCopy. If you have any questions, please let me know in the comments.



How to Install AzCopy

How to Install AzCopy for Azure Storage

AzCopy is a command-line tool to manage and copy blobs or files to or from a storage account. It also allows you to sync storage accounts and move files from Amazon S3 to Azure Storage. In this blog post, I will cover how to install AzCopy on Windows, Linux, and macOS, and how to update the version in the Azure Cloud Shell.

AzCopy v10 is now generally available to all customers and provides higher throughput and more efficient data movement compared to the earlier version of AzCopy (v8). Version 10 also adds functionality like syncing of Blob Storage accounts and much more.

Install AzCopy

You can get the latest version of AzCopy from here: Get started with AzCopy

Install AzCopy on Windows

To install AzCopy on Windows, you can run the following PowerShell script, or you can download the zip file and run AzCopy from wherever you want. The script below also adds the AzCopy folder location to your user PATH so that you can run the azcopy command from anywhere.

 
#Download AzCopy
Invoke-WebRequest -Uri "https://aka.ms/downloadazcopy-v10-windows" -OutFile AzCopy.zip -UseBasicParsing
 
#Alternative: curl.exe (Windows 10 Spring 2018 Update or later)
curl.exe -L -o AzCopy.zip https://aka.ms/downloadazcopy-v10-windows
 
#Expand Archive
Expand-Archive ./AzCopy.zip ./AzCopy -Force
 
#Move AzCopy to the destination you want to store it
Get-ChildItem ./AzCopy/*/azcopy.exe | Move-Item -Destination "C:\Users\thmaure\AzCopy\AzCopy.exe"
 
#Add your AzCopy path to the Windows environment PATH (C:\Users\thmaure\AzCopy in this example), e.g., using PowerShell:
$userenv = [System.Environment]::GetEnvironmentVariable("Path", "User")
[System.Environment]::SetEnvironmentVariable("PATH", $userenv + ";C:\Users\thmaure\AzCopy", "User")

Install AzCopy on Linux

To install AzCopy on Linux, you can run the following shell script, or you can download the tar file and run AzCopy from wherever you want. This script puts the AzCopy executable into the /usr/bin folder so that you can run it from anywhere.

 
#Download AzCopy
wget https://aka.ms/downloadazcopy-v10-linux
 
#Expand Archive
tar -xvf downloadazcopy-v10-linux
 
#(Optional) Remove existing AzCopy version
sudo rm /usr/bin/azcopy
 
#Move AzCopy to the destination you want to store it
sudo cp ./azcopy_linux_amd64_*/azcopy /usr/bin/
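
On both Windows and Linux, you can quickly verify that AzCopy is installed and reachable from your path:

azcopy --version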

Authorize with Azure Storage

When you start working with Azure Storage, you have two options to authorize against it: you can provide authorization credentials by using Azure Active Directory (AD), or by using a Shared Access Signature (SAS) token.

Which method you can use also depends on the storage service:

Storage type                            Supported method
Blob storage                            Azure AD and SAS
Blob storage (hierarchical namespace)   Azure AD
File storage                            SAS only

Authenticate using Azure AD

To authenticate with AzCopy using Azure AD, you can use the following command:

 
azcopy login
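
If your account has access to more than one Azure AD tenant, you can also pass the tenant ID to the login command:

azcopy login --tenant-id "<tenant-id>"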

Authenticate using SAS token

To authenticate with AzCopy using a SAS token, you can use this command as an example:

 
azcopy cp "C:\local\path" "https://account.blob.core.windows.net/mycontainer1/?sv=2018-03-28&ss=bjqt&srt=sco&sp=rwddgcup&se=2019-05-01T05:01:17Z&st=2019-04-30T21:01:17Z&spr=https&sig=MGCXiyEzbtttkr3ewJIh2AR8KrghSy1DGM9ovN734bQF4%3D" --recursive=true

To make things easier, you can use Azure PowerShell to generate the SAS token for you. I wrote a blog post on ITOPSTALK.com about how you can do that. You can get the SAS token using the following Azure PowerShell commands. If you are running Linux or macOS, you can find out in this blog post how to install PowerShell 6.

 
#Sign in to Azure and list your subscriptions
Connect-AzAccount
Get-AzSubscription
 
$subscriptionId = "yourSubscriptionId"
$storageAccountRG = "demo-azcopy-rg"
$storageAccountName = "tomsaccount"
$storageContainerName = "images"
$localPath = "C:\temp\images"
 
Select-AzSubscription -SubscriptionId $subscriptionId
 
#Get the storage account key and build a storage context from it
$storageAccountKey = (Get-AzStorageAccountKey -ResourceGroupName $storageAccountRG -AccountName $storageAccountName).Value[0]
 
$destinationContext = New-AzStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
 
#Create a SAS token for the container, valid for one hour, with read and write permissions
$containerSASURI = New-AzStorageContainerSASToken -Context $destinationContext -ExpiryTime (Get-Date).AddSeconds(3600) -FullUri -Name $storageContainerName -Permission rw
 
azcopy copy $localPath $containerSASURI --recursive

To learn more about SAS tokens, check out Using shared access signatures (SAS).

I hope this helps you to install AzCopy and configure it. If you have any questions, feel free to leave a comment.



StorSimple

Microsoft Announces Azure StorSimple Hybrid Storage Solutions For The Enterprise

Today Microsoft announced that, starting August 1, they will deliver the new StorSimple 8000 series hybrid storage arrays. The StorSimple 8000 series are the most powerful StorSimple systems ever and have even tighter integration with Azure, including two new Azure-based capabilities to enable new use cases and centralize data management. These new solutions demonstrate how Microsoft is bringing the best of on-premises storage together with the cloud in order to deliver bottom-line savings to customers, cutting storage costs by 40 to 60 percent and helping IT teams focus more on business strategy than on infrastructure management.

The new StorSimple 8000 series arrays come in two flavors to meet a variety of capacity and performance needs:  the StorSimple 8100 and the StorSimple 8600, which you can read about here.  These are enterprise hybrid storage arrays with a twist – instead of being limited to only SSDs and HDDs, these arrays use Azure Storage as a hybrid cloud tier for automatic capacity expansion and off-site data protection. That means IT teams don’t have to spend so much time and effort working on the next inevitable storage capacity upgrade or managing the complex details of data protection. Data stored on StorSimple 8000 series arrays is automatically protected off-site by cloud snapshots, which fill the enormous gap between problematic tape solutions and costly remote replication solutions.

To go with the new arrays, there is the Microsoft Azure StorSimple Virtual Appliance, an implementation of StorSimple technology running as an Azure virtual machine in the cloud. With a matching Azure StorSimple virtual machine, StorSimple 8000 series customers can run applications in Azure that access snapshot virtual volumes in the cloud. Customers will be able to run new applications that search and analyze historical datasets without disrupting production work in their datacenter. The StorSimple Virtual Appliance not only works with data from Windows Server and Hyper-V, but also with on-premises Linux and VMware servers, providing hybrid cloud capabilities for the most common server platforms today.

The Virtual Appliance also enables disaster recovery (DR) in the cloud. Virtualized applications that store their data on an Azure StorSimple array in a customer’s datacenter can be restarted in VMs in Azure with access to previously uploaded data. Updates to data made during recovery operations can be downloaded later to StorSimple arrays on-premises when normal operations resume.

DR is an area of concern for many customers, and they seldom get a chance to test their recovery capabilities. Microsoft Azure StorSimple 8000 series arrays and Virtual Appliances have a feature called Instant Recovery, which presents synthetic, full images of virtual volumes in Azure to applications and end users so they can start accessing data as soon as possible after a disaster. Instant Recovery accelerates restores and DR testing by downloading only the data that is needed.

Another groundbreaking capability in this release is the Microsoft Azure StorSimple Manager, which consolidates management for all of a customer's Azure StorSimple 8000 series arrays and Virtual Appliances. Administrators use the Manager to centrally control all aspects of StorSimple storage and data management from the cloud, so they can ensure consistent operations and data protection and retention policies across the enterprise. The new StorSimple Manager also gives administrators a dashboard with up-to-the-minute status and reports so they can quickly spot storage trouble and trends, allowing the IT team to spend less time on storage infrastructure management and shift resources to business applications.

StorSimple customers have been seeing the financial and IT efficiency benefits of hybrid cloud storage for years.  Now, the Microsoft Azure StorSimple solution brings new innovations to enable even greater operational efficiency, and is a great example of technology developed with a hybrid cloud design point and critical customer needs in mind.

 

If you want to know more about StorSimple, check out my blog post about StorSimple Cloud as a Tier and the Microsoft blog post from Takeshi Numoto about the new StorSimple 8000 series.

 



SMB Scale-Out File Server

Hyper-V over SMB: Scale-Out File Server and Storage Spaces

On some community pages, my blog post started discussions about why you should use SMB 3.0 and why you should use Windows Server as a storage solution. Let me be clear here: you don't need Windows Server as storage to make use of the Hyper-V over SMB 3.0 scenario; you can use storage from vendors like NetApp or EMC as well. But in my opinion, you can get a huge benefit from using Windows Server in different scenarios.

  • First, you can use Windows Server together with Storage Spaces, which offers you a really great enterprise-grade, scalable storage solution at low cost.
  • Second, you can use Windows Server to mask your existing storage by building a layer between the Hyper-V hosts and your storage, so you can easily extend your storage even with other vendors.

At the moment there are not a lot of vendors out there which offer SMB 3.0 in their storage solutions. EMC was one of the first to support SMB 3.0, and with ONTAP 8.2 NetApp now supports SMB 3.0 as well. But if you want to build an SMB layer for a storage system which does not support SMB 3.0, to mask your storage so you can mix it with different vendors or use it with Windows Server 2012 Storage Spaces, the solution is the Scale-Out File Server cluster. Microsoft has offered file server clusters for a while now, but since those were active/passive clusters, they were not really a great solution for a Hyper-V storage environment (even if a lot of small iSCSI storage boxes are active/passive as well).

Basically, the Scale-Out File Server lets you cluster up to 8 file servers which all share CSVs (Cluster Shared Volumes), as you know them from Hyper-V hosts, and present SMB shares created on the CSV volumes. The great thing about it: every node offers the same share, so this is an active/active solution with up to 8 nodes. Together with SMB Transparent Failover, the Hyper-V host does not see any storage downtime if one of the SOFS nodes fails.
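
To give you an idea of how simple the setup is, here is a minimal PowerShell sketch of creating the SOFS role and a continuously available share on an existing failover cluster (the role name, path, and groups are placeholder examples):

#Add the Scale-Out File Server role to an existing failover cluster
Add-ClusterScaleOutFileServerRole -Name "SOFS1"

#Create a folder on a CSV volume and share it for the Hyper-V hosts with continuous availability
New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" -FullAccess "DOMAIN\Hyper-V-Hosts", "DOMAIN\Hyper-V-Admins" -ContinuouslyAvailable $true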


For the storage guys out there: think of the cluster nodes as your storage controllers. Most of the time you will have two controllers for failover and a little bit of manual load balancing, where one LUN is offered by controller 1 and the other LUN by controller 2. With the Scale-Out File Server you don't really have that problem, since the SMB share is offered on all hosts at the same time, with up to 8 "controllers". With Windows Server 2012, one Hyper-V host connected to one of the SOFS nodes and used multiple paths to this node via SMB Multichannel, while the other Hyper-V host automatically connected to the second SOFS node, so both nodes were active at the same time. In case one of the SOFS nodes dies, the Hyper-V host fails over to the other SOFS node without any downtime for the Hyper-V virtual machines.

In Windows Server 2012 R2, Microsoft worked really hard to make this scenario even better: a Hyper-V host can now be connected to multiple SOFS nodes at the same time, which means that VM1 and VM2 running on the same Hyper-V host can be served by two different SOFS nodes.

Advantages of the Scale-Out File Server

  • Mask your storage and use different vendors
  • Scale up to 8 nodes (controllers)
  • Active/Active configuration
  • Transparent Failover
  • Supporting features like SMB Multichannel and SMB Direct
  • Easy entry point with SMB shares
  • Easy configuration: only the Hyper-V host and cluster computer objects need access to the shares
  • Same Windows Server Failover Cluster Technology with the same management tools

Storage Spaces

As already mentioned, you can use your existing storage appliance as storage for your Scale-Out File Server CSVs, or you can use Windows Server Storage Spaces, which allows you to build a great storage solution for a lot less money. Again, the Scale-Out File Server cluster and Windows Server Storage Spaces are two separate things: you don't need a SOFS cluster for Storage Spaces, and you don't need Storage Spaces for a SOFS cluster, but of course both solutions work absolutely great together.

Windows Server Storage Spaces vs Traditional Storage

Microsoft first released its Software Defined Storage solution, called Storage Spaces, in Windows Server 2012. It basically allows you to build your own storage solution based on simple JBOD hardware. Storage Spaces is a really cost-effective storage solution which allows companies to save up to 75% of storage costs compared to traditional SAN storage. It allows you to pool disks connected via SAS (in Windows 8 and Windows 8.1, USB works as well for home users) and create different virtual disks (not VHDs) on these storage pools. The virtual disks, also called Storage Spaces, can have different resiliency levels like Simple, Mirror, or Parity; you can create multiple disks on one storage pool and even use thin provisioning. This sounds a lot like a traditional storage appliance, right? True, this is not something totally different; storage vendors have been doing this for a long time. But of course you pay a lot of money for the black box the storage vendors offer you. With Windows Server Storage Spaces, Microsoft lets you build your "own storage" on commodity hardware, which will save you a lot of money.
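
As a rough sketch of how little is needed, the following PowerShell pools all disks that are available for pooling and creates a mirrored, thinly provisioned virtual disk on top (the friendly names are just examples):

#Pool all physical disks that are available for pooling (e.g., in a SAS JBOD)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

#Create a mirrored, thinly provisioned virtual disk (a Storage Space) on the new pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Data01" -ResiliencySettingName Mirror -Size 1TB -ProvisioningType Thin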


This is not just a "usable solution"; it comes with high-end storage features which make Storage Spaces and the Windows file server a perfect storage solution at low cost:

  • Windows Server Storage Spaces lets you use cheap commodity hardware
  • Offers you different types of resiliency, like Simple (stripe), Mirror, or Parity (also 3-way Mirror and dual Parity)
  • Offers you thin provisioning
  • The Windows Server file server allows you to share the storage via SMB, iSCSI, or NFS
  • Read-Cache – the Windows Server CSV Cache offers you a memory-based read cache (up to 80% of the server's RAM in Windows Server 2012 R2)
  • Continuous availability – storage pools and disks can be clustered with Microsoft Failover Clustering, so if one server goes down, the virtual disks and file shares are still available
  • SMB copy offload – offloading copy actions to the storage
  • Snapshots – create snapshots and clone virtual disks on a storage pool
  • Flexible resiliency options – in Windows Server 2012 you could create a Mirror Space with a two-way or three-way mirror, a Parity Space with single parity, and a Simple Space with no data resiliency. New in R2, parity spaces can now be used in clustered pools, and there is also a new dual parity option. (enhanced in 2012 R2)
  • Enhanced rebuilding – the speed of rebuilding failed disks is improved. (enhanced in 2012 R2)
  • Storage Tiering – Windows Server 2012 R2 allows you to use different kinds of disks and automatically moves "hot" data from SAS disks to fast SSD storage; see the sketch after this list. (new in 2012 R2)
  • Write-Back Cache – this feature allows data to be written to SSD first and moved later to the slower SAS tier. (new in 2012 R2)
  • Data Deduplication – Data Deduplication was already included in Windows Server 2012, but it is enhanced in Windows Server 2012 R2 and now works together with Cluster Shared Volumes (CSV) and supports VDI virtual machines. (enhanced in 2012 R2)
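
Here is a hedged sketch of what storage tiering and the Write-Back Cache look like in PowerShell on Windows Server 2012 R2, assuming a pool (here called "Pool1") that contains both SSDs and HDDs; the tier names and sizes are placeholders:

#Define an SSD tier and an HDD tier on the pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

#Create a tiered, mirrored virtual disk with a 1 GB write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk01" -ResiliencySettingName Mirror -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB -WriteCacheSize 1GB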

You can get more information about Storage Spaces in Windows Server 2012 R2 in my blog post: What’s new in Windows Server 2012 R2 Storage Spaces

Combine Windows Server Storage Spaces and the Scale-Out File Server Cluster

As mentioned, these two technologies do not require each other, but if you combine them you get a really great solution. You can build your own storage based on Windows Server, which not only allows you to share storage via SMB 3.0, but also via NFS or iSCSI.


A lot of the concerns I have heard were about the scalability of Storage Spaces, but as far as I can see, scale is absolutely no problem for Windows Server Storage Spaces. First of all, you can build up to 8 nodes in a single cluster, which basically means you create an 8-node active/active solution. With SMB Multichannel you can use multiple NICs, for example 10GbE, InfiniBand, or even faster network adapters, and you can also make use of RDMA, which brings latency down to a minimum.

To scale this even bigger, you can go two ways: you could set up a new Scale-Out File Server cluster and create new file shares where virtual machines can be placed, or you could extend the existing cluster with more servers and more shared SAS disk chassis, which don't have to be connected to all of the existing servers. This is possible because of features like CSV redirected mode: hosts can access disks attached to other hosts even if they are not connected directly via SAS; instead, the node uses the Ethernet connection between the hosts.


New features and enhancements in Windows Server 2012 R2 and System Center 2012 R2

With the 2012 R2 releases of Windows Server and System Center, Microsoft made some great enhancements to Storage Spaces, the Scale-Out File Server, SMB, Hyper-V, and System Center. In addition to the Storage Spaces improvements already listed above (flexible resiliency options, enhanced rebuilding, storage tiering, Write-Back Cache, Data Deduplication, and the CSV read cache), make sure you check out the following if you get the chance to work with R2:

  • Management – Management of Hyper-V and Scale-Out File Servers as well as Storage Spaces right in System Center 2012 R2 Virtual Machine Manager.
  • Deployment – Deploy new Scale-Out File Server Clusters with and without Storage Spaces directly from System Center 2012 R2 Virtual Machine Manager via Bare-Metal Deployment.
  • Rebalancing of Scale-Out File Server clients – SMB client connections are tracked per file share (instead of per server), and clients are then redirected to the cluster node with the best access to the volume used by the file share. This improves efficiency by reducing redirection traffic between file server nodes.
  • Improved performance of SMB Direct (SMB over RDMA) – Improves performance for small I/O workloads by increasing efficiency when hosting workloads with small I/Os.
  • SMB event messages – SMB events now contain more detailed and helpful information. This makes troubleshooting easier and reduces the need to capture network traces or enable more detailed diagnostic event logging.
  • Shared VHDX files – Simplifies the creation of guest clusters by using shared VHDX files for shared storage inside the virtual machines. This also masks the storage for customers if you are a service provider.
  • Hyper-V Live Migration over SMB – Enables you to perform a live migration of virtual machines by using SMB 3.0 as a transport. This allows you to take advantage of key SMB features, such as SMB Direct and SMB Multichannel, by providing high speed migration with low CPU utilization.
  • SMB bandwidth management – Enables you to configure SMB bandwidth limits to control different SMB traffic types. There are three SMB traffic types: default, live migration, and virtual machine (see the example after this list).
  • Multiple SMB instances on a Scale-Out File Server – Provides an additional instance on each cluster node in Scale-Out File Servers specifically for CSV traffic. A default instance can handle incoming traffic from SMB clients that are accessing regular file shares, while another instance only handles inter-node CSV traffic.

(Source: TechNet: What’s New for SMB in Windows Server 2012 R2)
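
As an illustration of the SMB bandwidth management feature mentioned above, here is a hedged PowerShell sketch; it assumes the SMB Bandwidth Limit feature is installed, and the limit value is only an example:

#Install the SMB Bandwidth Limit feature
Add-WindowsFeature FS-SMBBW

#Limit SMB traffic used for live migration to 1 GB per second
Set-SmbBandwidthLimit -Category LiveMigration -BytesPerSecond 1GB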

I hope this blog post helps you understand a little bit more about the Scale-Out File Server and Storage Spaces, and how you can create a great storage solution for your cloud environment.

By the way, the pictures and information are based on material from people like Bryan Matthew (Microsoft), Jose Barreto (Microsoft), and Jeff Woolsey (Microsoft).

 

 



Cloud as a Tier with Microsoft StorSimple

Microsoft StorSimple

Some weeks ago, an awesome package arrived at our office in Bern, and I finally have time to write about its contents.

Several months ago, Microsoft acquired a company called StorSimple, but there was no real buzz around it. For me, though, this is a huge step, and it shows in which direction the whole cloud and datacenter future is going.

StorSimple is one of the vendors that produce Cloud-integrated Storage (CiS). Basically, StorSimple is a hardware storage appliance with multiple storage tiers, such as SSD and SAS disks, and now also the cloud, meaning Windows Azure.

As mentioned, the box contains SSD and SAS disks and can be attached to your environment via iSCSI. iSCSI is not really my preferred option, but I expect Microsoft to implement SMB 3.0 in the StorSimple box at some point.

Primary Storage with SSD speed – The StorSimple box can be used as primary storage with SSD speed. The box is certified for Windows Server 2008 R2, Windows Server 2012, and VMware, but at the moment StorSimple is maybe not made for running virtual machines on it. I would say it's a great solution to attach the storage to your SMB file server and use it to expand your storage into the cloud.

Automatic Archiving – Cold data can be moved to cheap storage in Windows Azure, which can grow with your needs, while hot data stays on-premises on your SSD or SAS storage.

Backup & Restore – Files can be backed up and restored from the Cloud. Incremental, deduplicated snapshots reduce storage requirements by over 90%
while delivering instant snapshot and restore technology in minutes as opposed
to days. Cloud Snapshots offer offsite data protection via the cloud. It is now
simple and cost-effective to retain as many snapshots as you need – no more 30,
60 or 90 day limits.

Multi-Location Disaster Recovery – If a disaster strikes and you lose your datacenter, you can restore your StorSimple data to a new StorSimple box, at the same location or at another one, directly from the Windows Azure cloud.

Military-grade Security – All data stored in the cloud with StorSimple has military-grade encryption applied to it. The encryption key is never given to StorSimple or the cloud provider, ensuring complete data privacy to support compliance requirements as stringent as HIPAA.

Enterprise-class Storage – StorSimple solutions offer enterprise-class high availability with fully redundant disk controllers, power supplies, and network connections, and no single point of failure. They also support non-disruptive software upgrades.

Application-optimized Storage and Data Protection – Application-optimized volumes for Windows file shares, SharePoint and VMware libraries. Full support for VSS application-consistent snapshots is provided.

If you want to know more about StorSimple check out the StorSimple website.