Creating a VMware Datastore on DataGravity storage
I have recently been evaluating and getting to know the initial storage offering from DataGravity. In short, they have built a unique storage array that combines hybrid storage and storage analytics in one simple, easy-to-use product. As I work with it I will probably write up a few blog posts on how to work with things. Expect a detailed review over at Data Center Zombie soon after the new year.
I’m finding the product to be very easy to work with and thought a simple walk-through of how to create a new export that will be mounted as a VMware datastore would be helpful.
Upon logging into the management page for a DataGravity array you will see the following welcome screen. I will be creating some new storage, so I will click on the storage choice to proceed.
The storage choice displays a number of options once clicked. These are the major functions for creating and managing storage on the array. Click on Create Datastore to proceed with the task for this post.
The first step of creating the mount point that will become a datastore is to provide a name, capacity sizing, and an optional description.
This step is where you grant access for the hosts in the clusters that will utilize this new datastore. The image shows that I have already added the first host; by clicking the blue plus button you can add the other hosts.
The following image shows the process for adding another host. You can enter either the host name or the IP address to enable access.
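Since the access list takes either a host name or an IP address, a small helper can sanity-check a cluster's worth of entries before you type them into the dialog. This is just an illustrative sketch; the host names and IPs below are hypothetical, not part of the walk-through.

```python
import ipaddress

def classify_host(entry):
    """Return 'ip' if the entry parses as an IPv4/IPv6 address, else 'hostname'."""
    try:
        ipaddress.ip_address(entry)
        return "ip"
    except ValueError:
        return "hostname"

# Hypothetical hosts in the cluster that need access to the new export
hosts = ["esx01.lab.local", "192.168.10.21"]
for h in hosts:
    print(f"{h} -> {classify_host(h)}")
```

Either form works in the access dialog; checking up front just avoids a typo'd address silently failing to match a host later.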
The policy step is where you can select an existing Discovery Policy or create a new one. In short, these policies govern how the data is analyzed and protected. Once ready, click the Create button at the bottom and the mount point will then be ready to be configured on the vCenter side.
Now that the mount point is ready, I have selected one of my vSphere hosts and will add NFS storage to it. I have provided the IP for the data path to the storage array. The Folder is the same as the mount point name that we created earlier. The datastore name can be whatever you like; I have made it the same as the mount point name.
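If you have several hosts in the cluster, the same NFS mount can be scripted instead of clicked through in the vSphere Client. Here is a minimal sketch that builds the equivalent `esxcli storage nfs add` command; the array data-path IP and mount point name below are hypothetical placeholders, not values from my lab.

```python
def build_nfs_mount_cmd(array_ip, mount_point, datastore_name=None):
    """Build the esxcli command to mount an NFS export as a datastore.

    The datastore name defaults to the mount point name, matching the
    convention used in this walk-through.
    """
    if datastore_name is None:
        datastore_name = mount_point
    return (
        f"esxcli storage nfs add "
        f"--host={array_ip} "
        f"--share=/{mount_point} "
        f"--volume-name={datastore_name}"
    )

# Hypothetical values: the array's data-path IP and the mount point name
print(build_nfs_mount_cmd("192.168.20.50", "DG-Datastore01"))
```

Running the generated command on each host (via SSH or a host profile) presents the same export everywhere, with the Folder and datastore name kept consistent across the cluster.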
With the mount point created and presented on the VMware side, I have taken a look back in DataGravity to list the mount points on the array. From here you can see what was created along with details about capacity and the protection policy.
The last view here is a look at our newly created mount point. I have moved a few VMs onto the datastore and details about them have already started to appear. DataGravity is VM-aware, so you have access to more data than a legacy array would show.
By now you have an idea of how easy it is to create and present a new datastore. The other functions of DataGravity are just as easy to use.
About Brian Suhr
Brian is a VCDX5-DCV and a Sr. Tech Marketing Engineer at Nutanix and owner of this website. He is active in the VMware community and helps lead the Chicago VMUG group, specializing in VDI and Cloud project designs. Awarded VMware vExpert status 6 years running, 2011 - 2016. VCP3, VCP5, VCP5-Iaas, VCP-Cloud, VCAP-DTD, VCAP5-DCD, VCAP5-DCA, VCA-DT, VCP5-DT, Cisco UCS Design