rev.dennis

How do you migrate from F5 Appliance to VIPRION vCMP Guest


Trying to migrate our F5 BIG-IP 8900 series to a vCMP guest on our new VIPRION 2400 Series.  Would love to have steps to accomplish this.

Environment looks like this:

[Image: DualSiteViprions.png]

So I'm assuming I should get mirroring going first in each location (Chassis1 with Chassis2), but any suggestions on steps to do this would be great.

How do we add a vCMP guest from an existing 8900 series appliance (utilize a UCS file?)

Next, how do we make sure that the vCMP guests in Location A sync to the VIPRION chassis in Location B?

 


Most likely you are only after objects in the bigip.conf. 

You can extract the bigip.conf file and move it to /config on the vCMP guest.

Load it with:

tmsh load sys config

Here are a few issues you may have:

1) Route domain IP addresses (route domain not on the vCMP guest).
2) VLAN enabled on a config object (VLAN doesn't exist on the vCMP guest).
3) Partition enabled on a config object (partition doesn't exist on the vCMP guest).
4) Missing SSL keys (these are easy to get from the UCS).
5) Missing external monitors (these are easy to get from the UCS).

In my experience migrating, I was able to move the config to a vCMP guest using an old UCS file; the only catch is that I had to modify the bigip_base.conf file.

Basically, I removed the physical interfaces from the old UCS file (for obvious reasons, those interfaces aren't present on the vCMP guest).
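That cleanup can be roughly scripted: the awk pass below drops whole `net interface` and `net trunk` stanzas by tracking brace depth. This is a sketch run against a made-up sample file, not the poster's actual procedure; stanzas that merely reference a trunk (such as VLANs and STP instances) still need hand editing, so diff the result before repacking.

```shell
# Sketch: strip "net interface" and "net trunk" stanzas from a bigip_base.conf
# copy by tracking brace depth. The sample file below is made up; point the awk
# at the real file extracted from the UCS and review the output before repacking.
cat > bigip_base.conf <<'EOF'
net interface 1.1 {
    media-fixed 10000SFP+-FD
}
net trunk Bonded.Pair {
    interfaces {
        1.2
        1.3
    }
}
net vlan /Common/VLAN233 {
    tag 233
}
EOF
awk '
  /^net (interface|trunk) / { skip = 1 }            # start of a stanza to drop
  skip { depth += gsub(/{/, "{") - gsub(/}/, "}")   # net brace count on this line
         if (depth == 0) skip = 0                   # stanza closed
         next }                                     # suppress the line
  { print }                                         # keep everything else
' bigip_base.conf > bigip_base.conf.stripped
cat bigip_base.conf.stripped
```

Only the `net vlan` stanza survives in the sample; on a real file, remember the VLAN stanzas themselves may still reference deleted trunks.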

 

Guest

Viprion Setup

Install Viprion Chassis in equipment rack
Install Viprion blades
Power on Viprion Chassis
Configure management interface for Primary Blade
        Option 1: Connect to the console port of blade 1 with a serial terminal client using the following settings
        Bits per second (baud): 19200
        Data bits: 8
        Parity: None
        Stop bits: 1
        Flow control: None
Login with default account
Username: root
Password: default
Run following command to configure management interface
config
    
Connect management interface to network
Connect to management UI with web browser
Login with default UI account
        Username: admin
        Password: admin
Setup Utility should be running after login.  Go through setup process to activate license and provision Viprion as vCMP
Restart device then log back in to primary blade management interface (if required)
Configure management IPs for all blades
Connect remaining blade mgmt interfaces to network
Create vCMP guest(s)
        Navigate to vCMP > Guest List, and click the create button.
        Name guestX
        Host Name: guestX-A.thezah.com 
        Cores Per Guest: <# as required> (The following core counts are valid with the B2250 blade - 1, 2, 4, 8, 10, 20) (if you are replacing an 8950, 8 cores are suggested)
        Management Network: Bridged
        Management Port: 
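For what it's worth, the guest can also be created from tmsh on the VIPRION host instead of the GUI. The property names below (cores-per-slot, management-ip, and so on) are from memory and the hostname/addresses are placeholders, so verify everything against your TMOS version's vcmp guest help before running:

```shell
# Hypothetical sketch -- create the guest from the host's tmsh instead of the GUI.
# All names and addresses are placeholders; check property names on your TMOS version.
tmsh create vcmp guest guestX \
    hostname guestX-A.thezah.com \
    management-ip 10.1.1.50/24 \
    management-gw 10.1.1.1 \
    cores-per-slot 2 \
    state deployed
```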

 

Configuration Prep

Generate UCS backup on 8950 being migrated to Viprion

SSH to 8950 (Login with account having Advanced Shell or TMSH access)

tmsh save /sys ucs /var/tmp/$HOSTNAME"_"$(date +%Y%m%d).ucs
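The filename expression in that command expands to a per-device, dated name. Here is a quick dry run with a stand-in hostname (on the BIG-IP itself, $HOSTNAME is already set):

```shell
# Dry run of the UCS filename expression; HOSTNAME is a stand-in here.
HOSTNAME=bigip8950-a
echo /var/tmp/$HOSTNAME"_"$(date +%Y%m%d).ucs
# prints something like /var/tmp/bigip8950-a_20240115.ucs (date portion varies)
```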

Collect the Master Key from the 8950 in order to load the UCS on a different device.

The Master Key is shared between HA Device Group members, so you will only need the Master Key from one HA Device Group member.

f5mku -K > /var/tmp/<devicename_migration_key>

Verify key is saved to file

cat /var/tmp/<devicename_migration_key>

 (should see something like this - hEAcQkFlkJp7nw1o4WYdeZ==)
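Since the whole restore hinges on that key, it's worth a quick sanity check that the saved file holds a single base64-looking line before copying it anywhere. The key below is fabricated for the example; on the real system, check the file that f5mku actually wrote:

```shell
# Sanity-check the saved master key file; the key shown is a made-up stand-in
# for the output of: f5mku -K > /var/tmp/<devicename_migration_key>
printf 'hEAcQkFlkJp7nw1o4WYdeZ==\n' > migration_key
grep -Eq '^[A-Za-z0-9+/]+={0,2}$' migration_key && echo "key format looks ok"
```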

 

Modify UCS configuration files to work with Viprion

(this can be done on source LTM, destination Viprion or another linux station)

Here is the command to copy them to another destination:

scp /var/tmp/<devicename_migration.ucs> /var/tmp/<devicename_migration_key> <user>@<destination>:/var/tmp

CD to destination/working folder

cd /var/tmp

Create temp folder to work on UCS files

mkdir expanded

Enter 'expanded' folder

cd expanded

Extract UCS file

tar -xvf ../<devicename_migration.ucs>

 

Modify the configuration so the device comes up Forced Offline during migration

(Alternatively, Force Offline the standby device before creating the UCS for migration.)

vim config/BigDB.dat

 

[Failover.ForceOffline]

default=disable

type=enum

realm=local

enum=|enable|disable|

scf_config=false

display_name=Failover.ForceOffline

value=enable
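If you'd rather script that edit than open vim, a small awk pass can flip the value only inside the [Failover.ForceOffline] section. Shown here on a minimal made-up sample of BigDB.dat's INI-style layout (the second section name is invented for the demo); on the real system, run it against config/BigDB.dat in the extracted UCS.

```shell
# Flip Failover.ForceOffline to enable without touching other sections.
# This sample file is made up; BigDB.dat uses the same INI-style layout.
cat > BigDB.dat <<'EOF'
[Failover.ForceOffline]
default=disable
type=enum
value=disable
[Failover.Standby]
value=disable
EOF
awk '
  /^\[/               { insect = ($0 == "[Failover.ForceOffline]") }  # track current section
  insect && /^value=/ { $0 = "value=enable" }                         # flip only here
  { print }
' BigDB.dat > BigDB.dat.new && mv BigDB.dat.new BigDB.dat
grep -c 'value=enable' BigDB.dat
# prints 1: only the ForceOffline section changed
```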

 

        

Now it's time to remove interfaces, trunks, tunnels, etc.

vim config/bigip_base.conf

search for interface

delete the following lines

net stp /Common/cist {

    interfaces {

        1.1 {

            external-path-cost 20000

            internal-path-cost 20000

        }

    }

    trunks {

        Bonded.Pair {

            external-path-cost 20000

            internal-path-cost 20000

        }

    }

    vlans {

        /Common/Sync-Failover

        /Common/VLAN233

        /Common/VLAN235

        /Common/VLAN236

    }

}

net trunk Bonded.Pair {

    interfaces {

        1.2

        1.3

    }

    lacp enabled

 }

net vlan /Common/Sync-Failover {

    description Sync-Failover

    interfaces {

        1.1 { }

    }

    tag 950

}

net vlan /Common/VLAN233 {

    description "New Interconnect VLAN"

    interfaces {

        Bonded.Pair {

            tagged

        }

    }

    tag 233

}

net vlan /Common/VLAN235 {

    description VLAN235

    interfaces {

        Bonded.Pair {

            tagged

        }

    }

    tag 235

}

net vlan /Common/VLAN236 {

    description VLAN236

    interfaces {

        Bonded.Pair {

            tagged

        }

    }

    tag 236

}    

 

search for management

change IP to new address and update default gateway  

 

search for last octet of management ip (example .104) and update with correct IP Address.

NOTE: If part of a device-group search for last octet of other appliance and update to new IP Address.

 

Search for Sync-Failover and update with correct VLAN name (example: Sync-Failover-DEV)

 

Search for vlan and remove any unused VLANs (possibly add a command to validate they are unused).

Search for tunnels and remove any unused tunnels (possibly add a command to validate they are unused).

Save file
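The search-and-replace steps above are also easy to do with sed once you know the old and new values. A sketch on a made-up sample (all addresses are placeholders; on the real system, operate on config/bigip_base.conf, and escape the dots so "." isn't treated as a wildcard):

```shell
# Stand-in for the management lines in bigip_base.conf; all IPs are fake.
cat > bigip_base.conf <<'EOF'
sys management-ip 10.10.10.104/24 { }
sys management-route default {
    gateway 10.10.10.1
}
EOF
# Old management IP -> new management IP (dots escaped so "." isn't a wildcard)
sed -i 's/10\.10\.10\.104/10.20.20.104/' bigip_base.conf
# Old default gateway -> new default gateway (anchored to end of line)
sed -i 's/gateway 10\.10\.10\.1$/gateway 10.20.20.1/' bigip_base.conf
grep -E 'management-ip|gateway' bigip_base.conf
```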

 

Tar files into new UCS

tar -czvf ../<devicename_migration_mod.ucs> *

Copy modified UCS and Master Key file to vCMP guest on Viprion

scp <devicename_migration_mod.ucs> <devicename_migration_key> root@vCMPGuestIP:/var/tmp

   

On Viprion Host

Make sure all VLANs exist that are in bigip_base.conf which includes

Sync-Failover-DEV

Interconnect

Etc…

Ensure all VLANs are assigned to the appropriate vCMP guest so there are no errors when you load the UCS file on the guest.

 

vCMP Guest_a

SSH to vCMP guest

Login with account having Advanced Shell or TMSH access

CD to folder with modified UCS and Master Key file

Verify existing Master Key

f5mku -K

Rekey guest

f5mku -r `cat <devicename_migration_key>`

Verify Master key has been changed

f5mku -K

Load modified UCS file keeping existing license and skipping hardware platform check

tmsh load sys ucs <devicename_migration_mod.ucs> no-license no-platform-check

    

Check for config load errors:

tail -f /var/log/ltm

 

Here are some helpful commands used during migration to troubleshoot

List Self IPs

tmsh list net self | less

 

Show virtual address

tmsh list ltm virtual-address

 

Force Offline

tmsh run sys failover offline

 

Show Interfaces

tmsh show net interface

 

Disable interfaces (not mgmt or sync-failover)

tmsh modify net interface 1.2 disabled
tmsh modify net interface 1.3 disabled

 

Save Config

tmsh save sys config

 

tmsh list sys management-route

 

tmsh list net vlan | grep "net vlan"

 

Create VLAN

tmsh create net vlan Interconnect interfaces add { 1/1.4 } tag 500
tmsh create net vlan Sync-Failover-DEV interfaces add { 1/1.4 { tagged } } tag 91

 
