Piszki Lab | EN

My case study in the clouds…

EMC MirrorView configuration on EMC VNX arrays.


When building their own Disaster Recovery solutions, companies often reach for solutions based on data replication between storage arrays. One such solution (let us add: the cheapest) is EMC MirrorView. It is a very simple and easy-to-set-up service that fully cooperates with VMware Site Recovery Manager (SRM). LUN replication can be done synchronously or asynchronously; for the theory and terminology I refer you to the StorageFreak blog, where my colleague Tomek has described everything in detail. Here we will focus on configuring MirrorView directly on the VNX arrays; in my case these are a VNX 5200 and a VNX 5300.

mi28

As part of the preparations, we create a connection between the arrays through the SAN. We connect the ports described as MirrorView: port A-0 on SPA of the first array to port A-0 on SPA of the second array (and correspondingly for SPB). Ports which will take part in replication cannot be used in hosts' Storage Groups. If these ports are currently used to communicate with hosts, remove them from the Storage Groups before connecting the arrays (otherwise a restart of the SP controllers and a lot of nasty messages await us).

mi27
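If the MirrorView ports go through FC switches rather than a direct cable, the two ports also have to be zoned together. A minimal sketch in Brocade FOS syntax, one zone per SP pair (the aliases, WWPNs and config name are hypothetical, substitute your own):

    alicreate "VNX5200_SPA_MV", "50:06:01:60:xx:xx:xx:xx"    # WWPN of the 5200 MirrorView port A-0 (hypothetical)
    alicreate "VNX5300_SPA_MV", "50:06:01:61:yy:yy:yy:yy"    # WWPN of the 5300 MirrorView port A-0 (hypothetical)
    zonecreate "Z_MV_SPA", "VNX5200_SPA_MV; VNX5300_SPA_MV"  # repeat analogously for the SPB ports
    cfgadd "PROD_CFG", "Z_MV_SPA"                            # add the zone to the existing config
    cfgenable "PROD_CFG"                                     # activate (and cfgsave to persist)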

After the arrays are connected, verify that they see each other correctly under Hosts -> Initiators.

VNX 5200:  mi1

VNX 5300: mi2

As you can see, the connection is set up correctly. To be able to perform mirror operations, both arrays must know about each other: they must either be in the same Unisphere domain or in two different domains (Local and Remote).

mi3

This operation must be carried out from the array with the newer (higher-numbered) firmware: in my case, from the VNX 5200 I add the VNX 5300 (the other way around it will not work).

mi4

At this point the VNX 5200 has two domains, Local and Remote, while the VNX 5300 has only the Local domain.

mi5

From the VNX 5200 both arrays can now be managed simultaneously, seamlessly switching between them at the Unisphere client level.

mi8

Next, if we do not already have one, we create LUNs for the Write Intent Log. This log helps the array recover from problems that might occur during replication (something like a transaction log). The LUN itself does not have to be big: the recommended size is 2 GB (the minimum is 128 MB), but it cannot be created in a Pool; it must come from a RAID Group. Additionally, there must be two such logs, one for each SP. Under Storage -> Storage Configurations -> RAID Groups we create two new groups and bind a new LUN in each.

mi20

Now, under Data Protection, click on "Configure Mirror Write Intent Log" and add our LUNs. The Write Intent Log is not necessary for replication; if you do not have spare disks from which to create a RAID group, you can skip this step (its existence, however, increases safety).

mi21
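For reference, a rough naviseccli equivalent of these two steps, assuming the WIL LUNs were bound as LUN 10 and LUN 11 on RAID-1 groups 10 and 11 (all numbers hypothetical; the exact flags can differ between VNX OE releases, so verify against the CLI reference):

    # bind one small LUN per SP on the two RAID groups (hypothetical numbers)
    naviseccli -h <spa_ip> bind r1 10 -rg 10 -cap 2 -sq gb -sp a
    naviseccli -h <spa_ip> bind r1 11 -rg 11 -cap 2 -sq gb -sp b
    # allocate both LUNs as the Mirror Write Intent Log
    naviseccli -h <spa_ip> mirror -sync -allocatelog -lun 10 11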

Then we create the Reserved LUN Pool. The RLP is used for snapshots and to present VMFS volumes to ESXi during SRM tests; it is also required for asynchronous replication. The LUNs themselves do not have to be big (this depends on the amount of changes accumulating on the production volumes between successive update steps of an asynchronous copy). I created three 512 GB LUNs (they cannot be Thin). Add them under Data Protection -> Reserved LUN Pool.

mi14

With VMware SRM we can switch in both directions, so we create a similar set of LUNs on the second array.

mi15
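The CLI equivalent is a single call per array; a sketch assuming the three LUNs were bound as numbers 20-22 (hypothetical):

    # add the already-bound LUNs to the global Reserved LUN Pool
    naviseccli -h <spa_ip> reserved -lunpool -addlun 20 21 22
    # verify the pool
    naviseccli -h <spa_ip> reserved -lunpool -list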

Now we move on to setting up the replicas: create a new LUN (or choose an existing one) and from its context menu choose "Create Remote Mirror".

mi16

Depending on the distance, select whether the copy will be synchronous (link delay of no more than 10 ms) or asynchronous (delay of no more than 200 ms).

mi18
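The same dialog corresponds roughly to one of these two naviseccli calls (mirror name and LUN number are hypothetical; -usewriteintentlog assumes the WIL from the earlier step exists):

    # synchronous remote mirror on primary LUN 50
    naviseccli -h <spa_ip> mirror -sync -create -name LUN50_Mirror -lun 50 -usewriteintentlog
    # or, for the long-distance case, an asynchronous mirror
    naviseccli -h <spa_ip> mirror -async -create -name LUN50_Mirror -lun 50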

And so forth for each LUN. Now we go to the remote array and configure it in the same way (create the LUNs). After this operation, we return to the first array and check under LUN Mirrors whether everything is OK (Active).

mi22

Select the LUN and click "Add Secondary". The previously prepared LUN on the remote array must be the same size as the source and cannot be assigned to any Storage Group.

mi23
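A hedged CLI sketch of the same step, assuming the remote array's SP A answers at 10.0.0.2 and the prepared secondary is LUN 50 there (all values hypothetical):

    # attach the remote LUN as the secondary image of the mirror
    naviseccli -h <spa_ip> mirror -sync -addimage -name LUN50_Mirror -arrayhost 10.0.0.2 -lun 50 -recoverypolicy auto -syncrate high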

At this point we have a defined mirror image of our volume; we can now enable synchronization.

mi24
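If the image does not start synchronizing on its own (for example after a manual fracture), synchronization can also be started from the CLI; a sketch, assuming the image UID is read from the list output (all names hypothetical):

    # find the secondary image UID of the mirror
    naviseccli -h <spa_ip> mirror -sync -list -name LUN50_Mirror
    # start (re)synchronization of that image
    naviseccli -h <spa_ip> mirror -sync -syncimage -name LUN50_Mirror -imageuid <secondary_image_uid>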

If you have more volumes that are subject to synchronization, and these volumes will additionally serve a single vSphere DRS cluster, you might want to combine them into one Mirror Consistency Group.

mi25

This ensures that all synchronization operations are carried out simultaneously on all LUNs in the group.

mi26
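In CLI terms the group is two calls away; a sketch with hypothetical names (for asynchronous mirrors substitute -async for -sync):

    # create the consistency group and add the mirrors to it
    naviseccli -h <spa_ip> mirror -sync -creategroup -name SRM_CG01
    naviseccli -h <spa_ip> mirror -sync -addtogroup -name SRM_CG01 -mirrorname LUN50_Mirror
    naviseccli -h <spa_ip> mirror -sync -addtogroup -name SRM_CG01 -mirrorname LUN51_Mirror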

In addition, a Consistency Group translates directly into a VMware SRM Protection Group. At this stage the MirrorView configuration is complete. The case described here covers replication in one direction; bi-directional replication is also possible, and its configuration is very similar. Of course, in the bi-directional case we are talking about two different LUN sets, each replicated from one array to the other (we then have two active data centers, each replicated to the other site).


Author: Piotr Pisz

Into computers ever since I got a Commodore 64 at the end of primary school, through my beloved Amiga and an infinite number of Linux consoles, up to today's fully virtual days. A Unix/Linux systems administrator since 2001, and for seven years a faithful companion and protector of the Solaris system, until its sad end. In 2011 I descended into the depths of virtualization, then smoothly ascended into the clouds, where I remain today. Professionally, I work as a Systems Architect at the Polish Security Printing Works.

14 Comments

  1. Hi Piotr,

    First of all, my best compliments on your articles, because they are really clear! I just read the vVNX article, and I would like to know if it is possible to do the same with vVNX, that is, replication between primary and secondary over WAN combined with Site Recovery Manager.

    • Hi Marco,

      Thank you for the compliment, it is very nice to read :)
      This is a very good question. VNXe and vVNX have built-in replication, and for VNXe there are SRA adapters for SRM 5.8 and 6.0. It should work, but I think the vVNX + SRM configuration is not supported. I have decided to check this configuration in my free time :-)

      Regards,
      Piotr

      • Piotr

        Thanks for the quick and kind reply.
        I still have a couple of questions:

        1) vVNX replication
        OK, I'll wait for your check (and I thank you for that), but… if I have understood correctly, your doubt is that there is no SRA adapter for vVNX. Is that correct?

        2) Storage replication and physical equipment
        I know that this question is not strictly related to this article but, because I didn't find good information, I'll try to ask you:
        could you briefly explain what kind of physical equipment is usually used (in an enterprise environment) to replicate storage between sites?
        I mean:
        vnxstorage_sitea——-?——–layer2(fiber?)——–?———-vnxstorage_siteb

        a) About layer 2: is fiber the only option?
        b) Is there some kind of switch or special device in place of the question marks?

        I read something about "dark fiber" and DWDM, but I'm a bit confused, and I didn't find a decent schema or example of the physical devices (Brocade???)

        3) EMC MirrorView alternatives
        You wrote: "let us add: the cheapest"…
        Could you just mention some alternatives?

        I'm really interested in these topics :)

        • Hi,
          Exactly, there is no SRA adapter for vVNX (but vVNX is a virtualized VNXe).
          For EMC MirrorView all you need is a Fabric license on the Brocade FC switches (distance less than 16 km) or an Extended Fabric license (distance greater than 16 km) between the data centers. In my company we have a DWDM ring between all our locations (dark fiber is long-distance FC). So my schema is VNX -> Brocade FC -> DWDM -> Brocade FC -> VNX. If you have, for example, two DCs (distance 2 km) with a direct FC connection, you only need the Fabric and MirrorView licenses to run replication (two arrays and four switches). On VNX block storage, fiber is the only option; on file (VNX Unified, aka Celerra) replication runs over Ethernet. With vVNX/VNXe replication runs only over Ethernet (the infrastructure is therefore very simple). Are we talking only about VNX, or in general? MirrorView is "software" replication; beyond that you have RecoverPoint and VPLEX. A good idea is to talk with an EMC (or other vendor's) representative; they will send you an engineer to discuss what you need :-)
          Regards,
          Piotr

          • Thanks again for the kind and excellent reply!
            I can't wait for your post about the vVNX replication test :)

            bye
            Marco
            Italy

  2. Piotr

    You have written that the LUN for the write intent log should be at least 128 GB. Correct me if I am wrong, but shouldn't the size be 128 MB?

    Best regards,

    Wieslaw

    • Hi Wieslaw,

      Hmm, 128 MB is the minimum size with older levels of FLARE and for CX arrays. The recommended size for VNX is 2 GB, but there are really no restrictions in this matter.
      I have corrected my guide; 128 GB is not the minimum :-)

      Best regards,
      Piotr

      • Hi Piotr,

        Do you have any viable source confirming that the recommended size for VNX is 2 GB? (Question: does the 2 GB refer to one SP, or is it 1 GB per SP?)
        I checked the latest available docs and guides in the EMC knowledge base and couldn't find a recommended size at all. All the docs mention only the minimum size, even the navicli reference guide :)

        Best regards,

        Wieslaw

        • Hi Wieslaw,

          I spoke with a friend from EMC; the recommendation is one 2 GB LUN per SP. At the same time, he sent me a mail with documentation that refers to 128 MB ;-)
          Generally the size of the log does not affect performance; also, the WIL is not required for asynchronous copies.

          Best regards,
          Piotr

        • Hey,

          I was recently on a VNX training course, and there I learned some interesting things. Creating the WIL on a RAID group is not necessary; the log is automatically created in the RAM of the SP. This log is a bitmap, one bit per block, so 128 MB is enough to map multiple terabytes. It means that if someone does place the WIL on a RAID group, 128 MB should be entirely sufficient.

          Piotr

          • Hey Piotr,

            You are right. The WIL is not necessary, but the MirrorView configuration best practices recommend implementing it as the next level of security in case of an SP failure. The log which resides in the RAM of the SPs is called the fracture log, and after an outage it can be reconstructed from the WIL.
            The log is a bitmap that represents areas of the primary image called extents. It is the same logical notion that is used in database technology: an extent is a specific number of contiguous data blocks. I have allocated 128 MB on a RAID group, and it's still working :) Thanks Piotr.

            Wieslaw

  3. Is the SRM adapter (SRA) for MirrorView (preferably async) supported when replication is done over IP? I found that only FC was "certified", but I need confirmation.

  4. Hello Piotr,

    Thanks for this post. I have to configure MirrorView for a VNX5200 and a VNX5100, and it is my first time. Is the procedure the same? The master will be the 5200.

    Thanks.
