Vagrant up

Some notes about Vagrant.

Vagrant is an open source tool for automating the build of virtual machines. It is not virtualization software itself but a wrapper around most of the popular virtualization software.
It's a great tool for getting your hands on DevOps concepts for the first time, and for creating test and development environments that help you understand how you would deploy your production environment.

A simple workflow for setting up your Test & Development environment would be (see the command sketch after this list):

  1. Download and install the latest VirtualBox binaries. VirtualBox is freely available for every major OS platform and is the default, built-in virtualization provider for Vagrant.
  2. Download and install the latest Vagrant binaries.
  3. Define your needs and understand which OS/application you require.
  4. Either download a packaged VM with the application or make your own.
  5. Set up the Vagrantfile. (This is the config file that describes your entire Test and Development setup.)
  6. Define basic settings like IP and network type.
  7. Bring up the VM with the application by running "vagrant up".
  8. Interact with the OS/application and change it as per your needs.
  9. Run intensive testing.
  10. Finally, clean everything up by issuing the "vagrant destroy" command.
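As a quick preview, the whole cycle boils down to a handful of commands. Below is a minimal sketch using the publicly available "hashicorp/precise32" box that also appears later in this post:

$ vagrant box add hashicorp/precise32
$ vagrant init hashicorp/precise32
$ vagrant up
$ vagrant ssh
$ vagrant destroy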

So let's go step by step and follow our workflow.

You can download VirtualBox from here:

https://www.virtualbox.org/wiki/Downloads

Installing VirtualBox is usually very simple; running through the default options is fine.

After that you need to download and install Vagrant from here:

https://www.vagrantup.com/downloads.html

Again installing it will work just fine with default options.

The next step is to decide which application you want to test and which OS it needs. Once you know that, you can search the Vagrant box catalogues to see if someone has already built a template box for it.

The links to search those databases are below:

http://www.vagrantbox.es/
https://atlas.hashicorp.com/boxes/search

A box is a base virtual machine, much like an image in Docker. It contains a basic OS with generic files/libraries already installed so that you can instantly start using it for your own test environment.

To save you time, you may find that a lot of users have already gone through those testing cycles and built base boxes with empty configurations. However, if you wish, you may well build your own box using the steps described later. And maybe it's worth sharing your box with the rest of the world. (You can sign up to Atlas here https://atlas.hashicorp.com/ to upload and share your boxes.)

If you have found a usable box, I suggest you download it and add it to your local repository for later reference and use.

You can do that by issuing the following command.

$ vagrant box add hashicorp/precise32

or

$ vagrant box add precise64 http://files.vagrantup.com/precise64.box

The first one will download the box named "hashicorp/precise32", an Ubuntu base image with the VirtualBox Guest Additions installed. The VirtualBox Guest Additions are additional kernel drivers and configuration that let the virtual machine take advantage of VirtualBox features such as shared folders, improved networking performance, and more.

Make sure you note down the name of the box for later reference. Once downloaded, you can see your local database of boxes by issuing the command below.

$ vagrant box list

After you see your box listed, issue a simple "vagrant init" command and append the name of the box, for instance "vagrant init ubuntu/trusty64".

This will generate a simple file named "Vagrantfile" in the present working directory.

You can go through it to familiarize yourself. The contents are written in Ruby syntax but are quite understandable. There is a lot to the Vagrantfile that cannot all be covered here, but you can follow the Vagrant documentation to change and fine-tune the settings.

Once you are happy with the Vagrantfile, issue the command "vagrant up". This will bring up your VM and print a lot of lines with relevant information. Some information worth noting includes the network adapter, VM name, SSH setup, etc.

After the process finishes you can SSH into the VM by issuing the "vagrant ssh" command. You may be surprised that we haven't set up anything regarding SSH but can still SSH into the box. The reason is that whenever Vagrant brings up a VM it sets up SSH by default and forwards a localhost port to the VM's SSH port "22" on its eth0 IP. The very first forwarded port is usually 2222, and the IP of the VM's eth0 comes from the NAT adapter range in VirtualBox. Subsequent VMs get forwarded ports from 2200 onward.
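If you want to see exactly what Vagrant has set up for SSH, the "vagrant ssh-config" command prints the host, port and key in use. You can also pin the forwarded SSH port yourself in the Vagrantfile; a minimal sketch (the host port 2222 here is just an example value):

  # Replace the auto-assigned SSH forward with a fixed host port.
  config.vm.network "forwarded_port", guest: 22, host: 2222, id: "ssh"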

Since you now have a working VM, you can install all the software or apps you wish and do all your testing. Once you are done playing with it, you can simply exit the VM and do one of the following.

1) Power the VM off with "vagrant halt" or put it in suspended mode with "vagrant suspend". You would ideally do this if you plan to work on the VM later.

2) However, if you are done with your testing, you can simply destroy the VM by issuing the "vagrant destroy" command and Vagrant will delete all of its related files from your machine.

Manually adding/creating a box image for Vagrant

If you need to build your own box there are two ways to do it:

1) Find a base box, bring it up in Vagrant, install the relevant configuration & software, and finally package it.

2) Set up the box from a clean install using the OS's .ISO file.

I used the first option to setup my Ansible Host Server for Configuration Management in my Lab environment.

I simply downloaded the box, stored it in a local folder and added it to my repository as mentioned above.

Then I used the following steps to set up my Ansible host along with the CumulusLinux modules. (Since I will be testing Ansible with CumulusLinux, I am setting up the host with its standard modules and plugins.)

  1. vagrant init ubuntu/trusty64 (Go to the folder where you want to store the box and issue this command. This will initialize your Vagrant environment by setting up a basic Vagrantfile in the same folder.)
  2. After that simply bring up the VM by issuing the "vagrant up" command. You will see some output showing logs while Vagrant brings your VM up and configures SSH to it.
    1. One thing that I recommend you do here is to note down the name of the VM at the line stating "==> default: Setting the name of the VM:". This is the default name given by Vagrant to your VM in VirtualBox and will be used later for packaging the box.
  3. After that, SSH into the box with "vagrant ssh" and install the software below (prefix the commands with sudo or switch to root). A quick verification sketch follows the list.
    1. apt-get install software-properties-common
    2. apt-get install linux-headers-generic build-essential dkms
    3. apt-add-repository ppa:ansible/ansible
    4. apt-get update
    5. apt-get install ansible
    6. apt-get install git
    7. mkdir cumulus && cd cumulus
    8. git clone https://github.com/CumulusNetworks/cumulus-linux-ansible-modules.git
    9. cd .. && ansible-galaxy install cumulus.CumulusLinux
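Before exiting the VM, a quick sanity check is worth doing (a rough sketch; the exact versions, role paths and module locations will differ in your setup):

$ ansible --version
$ ansible-galaxy list
$ ls ~/cumulus/cumulus-linux-ansible-modules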

Once you are done installing the above, exit the shell and go back to the host shell where you are running Vagrant. Now that you have installed all your basic software and modules in the base box, it is a good time to package it as a box for later use. To do that you can issue the command below. The name after the "--base" argument is the one we noted down in step 2.1; yours may be a bit different.

  1. vagrant package --base Sessions_default_1445775590766_98082 --output ansible-1.9.4-w_cumulus.box

So now you have a box that you can refer to in your Vagrantfile, and whenever you bring this box up it will have all your desired modules and software installed. To do so you need one last step, i.e. to add the box to your repository, which you can do by issuing the following command.

  1. vagrant box add --name ansible-1.9.4-w_cumulus ansible-1.9.4-w_cumulus.box
  2. For the second option (building a box from a clean .ISO install), you can find details at this link: https://docs.vagrantup.com/v2/virtualbox/boxes.html

One of the beauties of Vagrant is its plug-in architecture: you can add functionality by installing plug-ins. A very common use case is adding additional virtualization providers apart from the default VirtualBox provider.

For instance you can add AWS provider by issuing the following command:

"vagrant plugin install vagrant-aws". This installs the AWS provider, and one of the advantages it gives you is that you can be selective about which machines run locally on your laptop and which run in AWS. You can simply say "vagrant up --provider=aws" to bring up the machines in your Vagrantfile that are defined under the AWS block.
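As a rough illustration of what the AWS side can look like, here is a minimal provider block based on the vagrant-aws plugin's documented options; the dummy box, credentials, AMI and key paths below are placeholders, not values from this lab:

Vagrant.configure(2) do |config|
  config.vm.box = "dummy"   # vagrant-aws ships a placeholder box for this

  config.vm.provider :aws do |aws, override|
    aws.access_key_id     = "YOUR_ACCESS_KEY"
    aws.secret_access_key = "YOUR_SECRET_KEY"
    aws.keypair_name      = "your-keypair"
    aws.ami               = "ami-xxxxxxxx"

    override.ssh.username         = "ubuntu"
    override.ssh.private_key_path = "~/.ssh/your-keypair.pem"
  end
end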

There are other providers supported by Vagrant as well, for instance VMware, Docker and Hyper-V.

Vagrantfile

As stated above, the Vagrantfile is the file containing the settings/configuration of your VM or project. When you run the "vagrant init" command it creates a Vagrantfile with a lot of settings, most of them commented out. These include network, provider and shared-folder related settings.

The beginning of the file always starts with main config block:

Vagrant.configure(2) do |config|
  # ...
end

Everything that defines your project resides within this block. For instance, a VM definition could look something like this.

Vagrant.configure(2) do |config|

  config.vm.define "ansible" do |ansible|
    ansible.vm.box = "ansible-1.9.4-w_cumulus"
    ansible.vm.hostname = "Provisioner"
  end

end

Be very careful with the blocks and indentation. The config.vm block above defines a new VM called "ansible", refers to the box we created before ("ansible-1.9.4-w_cumulus") and sets the hostname of the VM.

Likewise I can define another VM, for instance "spine111", whose settings would look like this:

  config.vm.define "spine111" do |spine111|
    spine111.vm.box = "cumulus-vx-2.5.3"
    spine111.vm.hostname = "Spine-111"

    # Internal network for switchport interfaces.
    spine111.vm.network "private_network", virtualbox__intnet: "S111L131P1P1"
    spine111.vm.network "private_network", virtualbox__intnet: "S111L132P2P1"
    spine111.vm.network "private_network", virtualbox__intnet: "S111L141P3P3"
    spine111.vm.network "private_network", virtualbox__intnet: "S111L142P4P3"
    spine111.vm.network "private_network", virtualbox__intnet: "S111S112P5P5_CLAG"
    spine111.vm.network "private_network", virtualbox__intnet: "S111S112P6P6_CLAG"
  end

Again we have to be careful with the indentation. In this block I have also added network adapters other than eth0 (which Vagrant creates by default); each is connected to its own private network segment defined by the "virtualbox__intnet:" argument. Apart from this I could also use type: "dhcp" to have an interface receive its IP via DHCP, or simply set a static IP with the ip: "192.168.1.1" argument.
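For completeness, those last two variants would look roughly like this inside the same VM block (the address is just an example):

    # Static IP on a private network
    spine111.vm.network "private_network", ip: "192.168.1.1"
    # Or let the interface get its address via DHCP
    spine111.vm.network "private_network", type: "dhcp"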

Multi-Machine

You can of course use Vagrant to deploy multiple VMs. As you have seen above, I have defined two machines, (1) ansible and (2) spine111, in the Vagrantfile. If I put both config blocks into one Vagrantfile, the whole file will look something like this.

Vagrant.configure(2) do |config|

  config.vm.define "ansible" do |ansible|
    ansible.vm.box = "ansible-1.9.4-w_cumulus"
    ansible.vm.hostname = "Provisioner"
  end

  config.vm.define "spine111" do |spine111|
    spine111.vm.box = "cumulus-vx-2.5.3"
    spine111.vm.hostname = "Spine-111"

    # Internal network for switchport interfaces.
    spine111.vm.network "private_network", virtualbox__intnet: "S111L131P1P1"
    spine111.vm.network "private_network", virtualbox__intnet: "S111L132P2P1"
    spine111.vm.network "private_network", virtualbox__intnet: "S111L141P3P3"
    spine111.vm.network "private_network", virtualbox__intnet: "S111L142P4P3"
    spine111.vm.network "private_network", virtualbox__intnet: "S111S112P5P5_CLAG"
    spine111.vm.network "private_network", virtualbox__intnet: "S111S112P6P6_CLAG"
  end

end

Provisioners

Finally, it wouldn't be right to leave this topic without mentioning provisioning scripts, files and tools that allow developers to automatically provision the VM with the desired configuration.

To do that Vagrant provides multiple options; you can:

  1. run inline shell commands, e.g. config.vm.provision "shell", inline: "apt-get update"
  2. run a script, e.g. config.vm.provision "shell", path: "test_script.sh"
  3. upload a file from the host to the guest VM, e.g. config.vm.provision "file", source: "./hosts", destination: "/etc/hosts"
  4. or, lastly, use a provisioning tool such as Ansible by defining a block for that tool in the Vagrantfile (see the sketch after this list).
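Putting a couple of these together, the provisioning section of a Vagrantfile could look like the sketch below; the playbook name is a placeholder:

Vagrant.configure(2) do |config|
  # Inline shell command run on the first "vagrant up" (or on "vagrant provision")
  config.vm.provision "shell", inline: "apt-get update"

  # Hand over to Ansible using a playbook kept next to the Vagrantfile
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end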

 

Links to read further:

Vagrant Documentation: https://docs.vagrantup.com/

That’s about it for now!

I hope this information has been useful and please feel free to comment and correct anything I may have misrepresented.

Thanks for passing by.


Cumulus Networks Open Networking

Recently I had a chance to try out some open networking and got my hands on Cumulus Networks' VX appliance. Cumulus Networks offers a Linux-based network operating system that can be installed on open switches, and they have also started offering a virtual appliance for those who want to try the platform for testing/demo purposes. Hence this post describes a basic setup of a leaf-spine topology running OSPF.

Figure 1: Cumulus Networks Open Networking OSPF leaf-spine topology

My intention is to extend this setup and integrate it with some other platforms to try out a mixture of technologies, but that will come at a later stage.

In my opinion, networking as a whole is experiencing a great shift with Open Networking, SDN and NFV, and it's a great opportunity for all of us to get out of the traditional networking mindset and try new ways of doing networking. You may not see a direct benefit, but I believe that in the long term it will improve your perception of how networking is done.

So let's talk about this post. I am going to set up a basic leaf-spine topology with the Cumulus VX appliance on VMware Workstation and run OSPF on top of it.

Please note that this post assumes you have basic knowledge of Linux and its utilities; even though I will be writing out each and every command, you will find that knowledge helpful.

The image is available to download from here.

https://cumulusnetworks.com/cumulus-vx/

After you have downloaded the image for VMware Workstation you need to set up your VMs by cloning it multiple times. In my case I had 4 VMs connected through different VMnets in VMware Workstation as per the table below.

The appliance configuration comes with 8 interfaces by default. Keep the first network adapter in NAT mode so you can SSH into the box later. The rest of the interfaces can be hooked together as per the topology. My setup was as in the following table.

Table 1

          LEAF-1    LEAF-2
SPINE-1   VMnet-1   VMnet-2
SPINE-2   VMnet-3   VMnet-4

Once you are done with these settings you can power on your VMs. Log in to your VMs with the following default username and password.

Username = cumulus

password = CumulusLinux!

Note down the IPs assigned to eth0 (Network Adapter 1 in Workstation) and SSH into the switches using your favorite terminal. In my case I got the following IPs:

Spine-1 = 192.168.47.132

Spine-2 = 192.168.47.133

Leaf-1 = 192.168.47.134

Leaf-2 = 192.168.47.135

Before we start playing, let's set up a few basic parameters like the hostname and time zone.

To change the hostname, modify the /etc/hostname and /etc/hosts files with the desired hostname and reboot the switch.

Use the following commands:

sudo vi /etc/hostname

sudo vi /etc/hosts

To change the time zone use the following command:

sudo dpkg-reconfigure tzdata

Once you are done with the above, you have to set up the interfaces. By default the appliance only enables networking on eth0, i.e. network adapter 1.

For the rest of the interfaces you have to edit the 'interfaces' file at /etc/network/interfaces.

In my case I set up the following by issuing the 'sudo vi /etc/network/interfaces' command on each switch:

This opens a text editor; move to the end of the file with the arrow keys and press the 'i' key to start editing. Once you have added the text at the end of the file, press 'Esc' and type ':wq!' to save and close.

On Spine-1

# This is swp1
auto swp1
iface swp1 inet static
    address 10.1.1.1
    netmask 255.255.255.0

# This is swp2
auto swp2
iface swp2 inet static
    address 10.1.2.1
    netmask 255.255.255.0

On Spine-2

# This is swp1
auto swp1
iface swp1 inet static
    address 10.2.1.1
    netmask 255.255.255.0

# This is swp2
auto swp2
iface swp2 inet static
    address 10.2.2.1
    netmask 255.255.255.0

On Leaf-1

# This is swp1
auto swp1
iface swp1 inet static
    address 10.1.1.2
    netmask 255.255.255.0

# This is swp2
auto swp2
iface swp2 inet static
    address 10.2.1.2
    netmask 255.255.255.0

On Leaf-2

# This is swp1
auto swp1
iface swp1 inet static
    address 10.1.2.2
    netmask 255.255.255.0

# This is swp2
auto swp2
iface swp2 inet static
    address 10.2.2.2
    netmask 255.255.255.0

Once you are done with the above configuration you have to bring up the interfaces by running the following commands on all the switches:

sudo ifup swp1 -v

sudo ifup swp2 -v

Verify your configuration by issuing the 'ifconfig' command. You should see the 'eth0', 'lo', 'swp1' and 'swp2' interfaces.
Test the connectivity on your point-to-point links by issuing the ping command.

If successful, you can move forward with configuring basic routing to simulate a layer 3 DC design on the leaf and spine switches.

Cumulus Linux uses the open source Quagga routing suite to run routing protocols. It supports OSPF, OSPFv3, RIP, BGP, etc.; in this post I will be using OSPF only.

To enable the OSPF routing protocol you have to edit the Quagga daemons file by issuing the following command:

sudo vi /etc/quagga/daemons

Change the 'no' in front of zebra and ospfd to 'yes'. You have to enable zebra because it handles the kernel routing table. Quagga as a routing platform is very capable and worth describing in detail, but that is not my intention here, as we need to move on with our setup (a sketch of the resulting daemons file follows the links below). If you are interested in learning more about it you can look it up here:

http://www.nongnu.org/quagga/

and here:

http://docs.cumulusnetworks.com/display/DOCS/Quagga+Overview
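For reference, after the edit the relevant part of my /etc/quagga/daemons file looked roughly like this (a sketch; daemons you don't need stay at "no"):

zebra=yes
bgpd=no
ospfd=yes
ospf6d=no
ripd=no
ripngd=no
isisd=no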

Moving on, once you have made the above changes, you have to start the Quagga service by issuing the following command:

sudo service quagga start

Once done, you are ready to go into the vty shell provided by Quagga and configure OSPF routing there.

Just type 'sudo vtysh' and you will be taken into it.

The command line is very similar to Cisco's IOS, so if you are familiar with that you are good to go.

For configuring OSPF you have to remember only one thing: Cumulus Linux does not support more than one OSPF instance. Hence, when configuring, do not specify an instance ID in your commands. My config on Spine-1 was as below:

router ospf
 ospf router-id 0.0.0.1
 log-adjacency-changes detail
 network 10.1.1.0/24 area 0.0.0.0
 network 10.1.2.0/24 area 0.0.0.0

You can apply similar configuration to all your switches by advertising the correct networks.
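For example, on Leaf-1 the equivalent configuration would advertise its two point-to-point networks. The router-id below is simply my own numbering choice, not a required value:

router ospf
 ospf router-id 0.0.0.11
 log-adjacency-changes detail
 network 10.1.1.0/24 area 0.0.0.0
 network 10.2.1.0/24 area 0.0.0.0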

Once done, make sure you issue the "wr mem" command to save the configuration. It will update the '/etc/quagga/quagga.conf' file, because we are using the integrated config.

There is another option to keep the configuration of each protocol separate by not choosing the integrated config option; Quagga will then store the configuration of each protocol in its own config file. The files are available under '/etc/quagga/'.

Exit the vty shell. I would recommend restarting the networking and Quagga services by issuing the following commands.

sudo service networking restart

sudo service quagga restart

If you have made it this far correctly, you should be able to see routes in the vty shell or in the Linux bash shell.

In the vty shell you can issue a simple 'show ip route' command, and in the Linux bash shell you can issue the 'route -n' command.

In my case, on Spine-1, I had the following routes in the bash shell.

Figure: Kernel IP routing table for Cumulus VX OSPF (screenshot of 'route -n' on Spine-1)

I hope this information has been useful and please feel free to comment and correct anything I may have misrepresented.

Thanks for passing by.

Reasoning Network Automation, SDN and Network Virtualization – Part – 2

Recently I have been very busy in many dimensions, of which technology is just one small part, and couldn't find time to come back to my blog and post my thoughts.

However, today I decided to take my discussion further regarding Network Automation, SDN and Network Virtualization, starting with Network Virtualization.

In my previous post I went through different pain points regarding what legacy networking has to offer today.

Some of the highlighted pain points were: (For details you can follow my previous post)

  • Stuck with vertical integration of the boxes
  • Networking does not happen any more on Layer 3 of OSI Model
  • Resistance to change and hence restriction in innovation
  • Lack of flexibility
  • Static configuration and operation

Before I start, however, I would like to mention that by no means do I think networking as a technology is legacy; rather, the way we have operated networks up until today has become legacy.

This point is worth mentioning because I am going to frame my discussion around how we can take a fresh look at legacy networking by utilizing the above-mentioned technologies.

As I said let me start with Network Virtualization first.

I believe that Network Virtualization has been around for many years, however we have been using it only partially; VLANs, VRFs and GRE tunnels used to create overlays are just some examples.

The purpose of network virtualization is to decouple your network from the underlay, which is usually made of physical, static devices and connections.

By means of network virtualization we attempt to create a software-based network that lets us define the topology and network environment of our choice. By doing that we ultimately remove the dependency on the physical, static environment. Not only that, it also provides many cost-saving opportunities, for instance separating different types of traffic into different VLANs. If we were to do this with separate physical switches, as you can imagine, you could end up with a huge physical infrastructure footprint and higher costs to operate it. Physical infrastructure also does not let us utilize resources efficiently. For example, if you were to provide connectivity to 5 hosts you might be forced to buy a 24-port switch because it's the smallest option available on the market. If you use VLANs, however, you can take advantage of the technology and utilize the physical infrastructure efficiently. The same logic applies to nearly all virtualization technologies: they provide flexible and efficient use of the infrastructure.

*** One thing that I would like to highlight here is that, somewhere down the line, the solution/technology we choose is defined by how much risk we can undertake. For instance, by creating multiple VLANs we accept a small risk of data leakage between VLANs that could have been completely avoided by segregating the users physically. Anyway, the idea is that technology should be adopted to serve our needs and not vice versa. To achieve this goal efficiently, one has to understand not only the technology but also one's own needs.

Coming back to my point: apart from VLANs, GRE tunnels and VRFs are, in my opinion, other examples of network virtualization. Using these techniques you can not only segregate traffic but also control how your network topology looks and behaves. For instance, you can create a topology of GRE tunnels and present it to the user as a topology of its own.

Anyway, the idea of creating GRE tunnels or other virtualization constructs is not new, but how it is implemented and presented to the user is what can make a difference.

One way to look at it could be to create different GRE-tunnel-based topologies for different customers. Think of it as a logical network over a physical network using plain IP as transport. When you present the topology to the end user, it is just a network topology like any other, on which they can build their own administered network functions or application infrastructure. As an end user, they have no business with the underlying physical infrastructure and are concerned only with the availability of the network topology presented to them.

Similar approaches can be taken for other overlay/tunneling technologies such as VxLAN/STT.

So the question is: what differentiates today from the past? The difference is the agility and the use of these tunneling protocols to deliver networks much more quickly and network functions much more flexibly than before (with the help of programmability), all with the introduction of Software Defined Networking (SDN).

One thought worth mentioning here is the difference between network overlays and network abstraction. You may hear these two terms a lot and may end up asking the same question I asked myself: what is the difference between them? Although I will be talking about abstraction in my next post on SDN, I believe I should make the point before ending this one.

In the simplest terms, overlays are built on top of other transport technologies/protocols. Overlays use not only the underlying physical infrastructure but also the transport protocol itself. Abstraction, on the other hand, happens at the infrastructure level, which can be physical or virtual. Think of abstraction as slicing the underlying infrastructure into two separate functions, i.e. separating the control plane (the brain) from the data plane (the body), or the supervisor of a switch from its line cards. Of course, for the brain to communicate with the body we need another communication protocol (a southbound protocol), e.g. OpenFlow.

However once the abstraction is done, you may or may not build overlays on top of it to segregate different network domains for different customers. Kind of confusing? Right !!! 🙂

Please note that OpenFlow is just one of many protocols used for the southbound channel; implementations based on BGP, NETCONF and many more are already in progress. However, OpenFlow was not only the first to be used to implement the abstraction but also the first to be standardized by the ONF (Open Networking Foundation).

If you are interested in following up on some industry implementations of network overlays and abstraction, you can look at VMware NSX, Cisco ACI and Big Switch, among others.

In the next post I will take a step forward and talk about SDN and the abstraction model in a bit more detail.

Feel free to comment and correct anything I may have misrepresented.

A Perfect virtual Lab Switch – vEOS

Recently I have been looking for a lab switch to interconnect my virtual and physical lab environments. What I was looking for was not just that, though.

I was looking for support for stretched L2 domains between different interfaces of multiple ESXi hosts running on top of VMware Workstation, which in turn was running on a bare-metal workstation.

In summary, I was looking to run a virtual switch on VMware Workstation, connect multiple ESXi hosts to it, and have the different interfaces of the ESXi hosts stretched over L2 across the other servers running VMware Workstation.

My current setup contains a total of 3 bare-metal workstations running multiple ESXi hosts on top of VMware Workstation. (I will post my lab topology in an upcoming post.)

Apart from the above, I am also planning to build some L3 routed networks and test some VXLAN and OpenFlow features in a mixed environment in the future.

After searching a bit I landed on a virtual switch by Arista running the Extensible Operating System (EOS), i.e. vEOS. This switch seemed to solve all my problems and meet my needs, so I thought I'd give it a test drive.

In this post I am going to give a brief overview of EOS in a VM and walk through the simple setup steps on VMware Workstation.

Overview

Source: (https://eos.arista.com/veos-running-eos-in-a-vm/)

EOS is released as a single image that supports all of our platforms. That same single image can even be run in a virtual machine! This article describes how to set up a virtual machine in order to test EOS functionality, or to develop and test your own extensions.

EOS in a VM

EOS run in a VM can be used to test almost all aspects of EOS, including:

  • Management Tools – CLI, Snmp, AAA, ZTP
  • L1 Connectivity – Link up/down (when connected to another EOS VM port), LLDP
  • L2 – VLANs, Port-channels, MLAG
  • L3 – Routed ports, Static routing, BGP, OSPF, VARP, VRRP
  • Extensibility – eAPI, python APIs to Sysdb, OpenFlow

Walk through installation steps

Getting a copy of vEOS is simple and straightforward. Register yourself on the following website and then download the following two files.

Register here: https://www.arista.com/en/support/software-download

Download the following two files:

  1. Aboot-veos-n.n.n.iso (where "n" represents the version). This is the boot loader.
  2. vEOS-n.n.nX.vmdk (This is the vEOS SWI image; "nX" denotes the minor release.)

I have the following:

Aboot-veos-2.1.0.iso

vEOS-4.14.2F.vmdk

Once you have the files, start the "New Virtual Machine Wizard", choose the "Custom" option and press Next.

Figure 1

From this window choose the hardware compatibility version. I chose "Workstation 10.0" and pressed Next.

Figure 2

From the Guest Operating System Installation window, browse to the bootloader file, i.e. the .iso file, in my case "Aboot-veos-2.1.0.iso". This is not a usual installation disk and needs to stay in the virtual CD-ROM every time we boot. As I said, it's a boot loader file and it loads the image of the switch. Press Next.

Figure 3

vEOS is based on Fedora Linux, so in the next window you can choose Linux as the guest operating system with Fedora 64-bit as the version, or choose Linux and Other Linux 64-bit.

Figure 4

In the next window give your virtual machine a name that will appear in the VMware Workstation list.


Figure 5

Choose the number of CPUs in the Processor Configuration window. One is more than enough; it's not that processor intensive.

Figure 6

In the Memory for the Virtual Machine window, choose the amount of memory you want to allocate to the switch. 2 GB of memory per vEOS instance is recommended, though 1 GB is sufficient for most testing. Don't go lower than that; I tried it with 512 MB and the VM started crashing.

Figure 7

In the next window you can choose the network interfaces for the switch. I suggest you choose one for now, be it in bridged, host-only or NAT mode, and add additional network interfaces at a later stage. The first interface will be used as the management interface of the switch.

Figure 8

Choose the I/O controller in the next window. Go with the LSI Logic (Recommended) option.

Figure 9

In the "Select a Disk Type" window you have to be careful. Choose IDE from the radio buttons and not the default SCSI disk type. The only disk type supported by vEOS is IDE as of now.

Figure 10

In the Select a Disk window choose "Use an existing virtual disk" and press Next.

Figure 11

Browse to the ".vmdk" file you downloaded in the "Select an Existing Disk" window.

Figure 12

Don't press Finish yet at the "Ready to Create Virtual Machine" window. Instead, press the "Customize Hardware" button and remove unnecessary virtual hardware from the virtual machine, such as the printer and sound card.

Figure 13

Figure 14 (With unnecessary hardware)

Remove any unnecessary hardware and add additional network adapters, one by one, by pressing the "Remove" and "Add" buttons respectively. In my case I added 4 for testing purposes only.

One dumb experience that I had with vEOS is worth mentioning here. While vEOS supports L3 features, if you only have one interface vEOS is smart enough to reject the CLI command to enable L3. The switch was much smarter than me, I suppose 🙂.


Figure 15

Once done, press the Finish button, although we are not finished yet.

Figure 16

After the virtual machine shows up in VMware Workstation, edit the Virtual Machine Settings to adjust the advanced disk settings.


Figure 17

When booting up vEOS it is important that both the CD-ROM and the hard disk are on the same IDE bus. To do that you have to select the CD-ROM and hard disk from the device list in the Virtual Machine Settings window and press the Advanced button on the right.


Figure 18

Figure 19

Make sure both the CD-ROM and the hard disk are on the same IDE bus, i.e. they should both be either IDE 0:n or IDE 1:n. You can change it from the drop-down menu in the "Virtual Device Node" sub-window.

Figure 20

If you miss the above, you will see the system boot into the Aboot# prompt of the bootloader and get stuck there. This is purely because the hard disk containing the image is not found/mounted by the bootloader, and hence the system cannot boot further.

So, as I said, both the CD-ROM and the hard disk need to be on the same bus, and I would prefer that you put the CD on IDE 0:0 and the hard disk on IDE 0:1, even though the reverse will work too.

Another important factor to keep in mind is that when you boot vEOS it will write its configuration to the hard disk you specified as the ".vmdk" file. So if you want to run multiple vEOS instances in your lab, you had better make a copy of the disk for each instance before changing anything on it.

After you have completed the above step, press OK and go ahead and boot your VM.

Below is a screenshot of a VM successfully loading vEOS. As you can see, the boot loader has found the vEOS.swi (image) file on the flash (the .vmdk).

Figure 21

Figure 22

Once the switch has successfully booted you will receive the following login prompt.

Figure 23

Log in with the username "admin" and no password.

Figure 24

From here you can start playing with the switch.

One thing that I found interesting, in addition to a lot of other stuff, is that the CLI of vEOS is a Cisco-style CLI and much easier to adapt to.

So go ahead and give the usual "enable" command to enter enable mode, go into configuration mode and start configuring the enable password, IP addresses for the different interfaces, etc.
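As a rough sketch, an initial session could look like the following; the hostname and management address are made-up values for illustration:

enable
configure terminal
hostname vEOS-1
interface Management1
   ip address 192.168.47.150/24
end
write memory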

I hope you have enjoyed this post and have a vEOS switch running successfully. In the next post I will cover my lab topology in more detail and show how I have used vEOS to set up my home lab.

Stay tuned and enjoy labbing 🙂

Feel free to comment and correct anything I may have misrepresented.

Some Useful links:

https://eos.arista.com/

https://eos.arista.com/category/top/dev-blog/

https://eos.arista.com/veos-running-eos-in-a-vm/

https://www.arista.com/en/support/docs

Reasoning Network Automation, SDN and Network Virtualization – Part – 1

I wanted to take some time today and write this post to share my thoughts on what the current trends in the networking industry, such as Network Automation (DevOps), SDN and Network Virtualization, mean to me and what I expect out of this evolution.

As you may have already noticed, there have been a lot of discussions and work going on amongst technologists and vendors to find the best way of taking legacy networking (I say legacy networking because it has been roughly the same for the past 20 to 25 years 🙂) to the next level. While it is really impressive how different minds come together to solve similar problems, it is also striking how many people lose sight of the basic reasoning they started from.

Anyway, I will come back to this later; I want to start with a problem statement to describe the problem first and then discuss how current trends can/may solve it.

Problem with Legacy Networking:

Stuck with vertical integration of the boxes

The first and foremost problem I see with legacy networking is that we are stuck with box vendors. Networking is still the same as it was 20-25 years ago, but it has become more and more complex as business and user needs have been solved by integrating ever more into the same physical boxes. And who wouldn't agree that collecting as many networking vendor certifications as possible (e.g. Cisco, Juniper, Brocade) has become the way to prove you are a network engineer? The notion of becoming a box-oriented network engineer has somehow faded the lust for real knowledge, and as a result has not only limited innovation but also tied innovative minds to the overall business strategy of that vendor. Consequently, the moment you change the box, you may end up changing the way you use the technology, hence the complication.

Networking no longer happens on Layer 3 of the OSI model.

First of all I would like to state my opinion of networks, i.e. networks are there to facilitate communication and are a simple means to transport IP packets 🙂 through route-calculation methodologies.

On another note, networks are also the means of daily use for end users through applications. Applications need a communication platform, and hence the communication platform needs to provide an architecture/model that eases use at the user level, not at the complex engineering/architectural level.

However, the problem is that user requirements are changing day by day and the applications serving those users are adapting to the change quite rapidly (due to their shorter development cycles), but networking is not, and in fact has not kept pace with that change at all. Instead, what the vertically integrated solutions (networking vendors) have done is introduce complexity into an already complex environment to overcome (I say overcome, not adopt, for a reason) the change.

Not only that: slowly but surely this approach has further merged the networking layer with the upper layers and created new models of network services, e.g. security (firewalls) and application delivery (load balancers). We ended up jumping from plain old Layer 3 of the OSI model to Layer 7, without realizing that we initially built these networks to route packets and routers to perform packet forwarding, not to become super-giant network-to-application-layer forwarding machines. And all of this runs on the same Layer 3 foundation built to perform, you got it, "PACKET routing". I hope you realize where I am going with this 🙂.

Networking has been simple from the beginning at layer 3 but has become quite complex by moving it up to Layer 7.

However, one has to admire the innovation that has benefited networking too. One example would be MPLS. I believe MPLS has been a very unique experience in networking evolution and, if I may reiterate, the reason it has been beneficial is that we put our efforts into finding the best solution for improving networking, and not the other way around, as I described above.

Resistance to change and hence restriction in innovation.

The challenge I see while working with many networking vendors is that they have always been interested in solving newer problems or needs of the user/application within their vertically integrated solutions. That implies that if you keep their infrastructure, they will find a solution for you and you may end up adopting it, however on their terms and conditions. When I say terms and conditions, I mean the partner ecosystem that you have to stick with in order to fulfill your requirements. This has resulted in two things:

  1. Lack of innovation in the hardware
  2. Increase in complexity of the overall networking solution

The challenge for hardware vendors has been adoption, and that of course is understandable, because programming hardware to perform a new task other than what it was built for requires much more time than the user/business can wait. Usually, to meet a new requirement needing that kind of change, hardware vendors will introduce either new multi-million-dollar equipment that is specifically designed to do that task (and maybe a few others), or a complex solution with complex configurations, ending up with compromises and configuration/operational overhead for the user/business.

Lack of flexibility:

The solutions offered by different networking vendors are usually presented as flexible by pointing to a big ecosystem of partners. However, this again implies that the user/business is stuck with those vendors for the life cycle of the infrastructure. I have seen this in real life, and I have literally seen how those vendors cash in on that position with the user/business.

Having said the above, I do not feel this is a flexible solution at all, because the solution is only possible with specific vendors. It is not, like MPLS, open to anyone.

Static configuration and operation

To all the above comes the final part: configuration, deployment and operation. From my experience in networking, having worked from the bottom to the top of the industry, I would say that the configuration of networking equipment can become quite complex in such deployments. These setups are not only complex for the staff who must be trained on them, but also complex for the user to adopt (increased OPEX).

Not to mention the challenges of adopting one way of deploying a solution versus another (no fluidity). I have seen deployments in my career that took one way of implementing a solution and ended up being so complex later that any change for future needs became impossible; on top of that, maintaining the infrastructure became another challenge altogether for the users.

In summary, I conclude that the problems are:

  1. Limited innovation in hardware
  2. Configuration complexity
  3. Increased operational overhead when maintaining complex networks
  4. Complexity in meeting daily business demands (agility)
  5. Lack of simplicity where it is needed
  6. Lack of openness

 

In the next post I will describe how emerging trends in Network Automation (DevOps), SDN and Network Virtualization address the above-mentioned challenges.

What is NetVirt all about?

Well I want to take a moment and share what I am planning to write about in this blog.

First of all I would like to share that I am very passionate about networking and related technologies, especially when it comes to integrating different technologies at different layers and different vendors.

Having said that, my experience ranges from configuration, design and architecture through to delivery. After enjoying all of these different flavors, I wanted to do something different and went to experience the customer side. Things have been quite challenging on both sides. On one side you have to keep yourself updated with all the technologies, justify your design and architecture, and make sure you deliver in the end. On the other side you get to enjoy the environment you are in and work on optimization, scale, management and ease of use. Needless to say, both have their advantages and disadvantages. (I will talk about those sometime later in my blog.)

So, coming back to my point above, what do I want to write about? I want to write about my experience and share my opinion on:

  • New ways of networking such as Software Defined Networking (SDN), Network Virtualization (NV), Network Function Virtualization (NFV) and Network Automation
  • Designing & architecting networks
  • How customers perceive the technology and how they can benefit
  • Scaling, optimizing and improving networks
  • Last but not least, what I like about the solutions provided by the industry
  • Creating testbeds of interesting solutions as and when possible

I think with that enough said :).

However, I want to say one last thing, and that is why I never blogged before and what has urged me to do so now. With that, I will also try to put my initial understanding of some current trends in the networking world on my blog.

  • The first reason I am starting to blog is that I am a strong believer in sharing your thoughts and listening to others, which gives you the opportunity to learn more and get the most out of it. Previously I had been teaching a lot and that fulfilled this need; however, I stopped teaching a while ago and hence wanted to start a blog to share, learn and collaborate with the world.
  • Secondly, while it's true that I have spent a very interesting 12 years in the networking world, I was somehow getting bored with the different ways networking was done. Being a technology lover, I was in search of something new, and guess what, I found it.

I found two things:

  • Network Virtualization and SDN: I am using Network Virtualization as a broad term, not to describe legacy network virtualization like MPLS, VRF-Lite or VLANs, which I have been doing for a while, but rather to refer to new approaches to network virtualization like NFV and modern physical network abstraction solutions like VMware NSX.

By SDN I mean the concept of extracting the control from the physical boxes and placing it in either a centralized or a decentralized location. Following this concept there has been a lot of hype in the networking world. As a result, some vendors have introduced a model in which the controller does most of the control work and the switches do forwarding only, while others have introduced a hybrid model that mixes and matches both, i.e. a controller does the job of control but some control is equally kept on the switches in the field, e.g. Cisco ACI.

  • Network Automation and Orchestration: No doubt automation has been around for a while in networks, through network management systems or script-based approaches, but I would say we have always focused on the network-device configuration level and missed the big picture of the network itself. One example is on-demand deployment of network services and functions (NSX, OpenStack (the Neutron module), OpenDaylight, NFV). Another example is creating a management and orchestration layer for physical networks, like Application Centric Infrastructure (ACI, which can do more as well) and the Cisco Application Policy Infrastructure Controller, Enterprise Module (APIC-EM).

So here you go! Now you know what I will be blogging about here and what I think about Network Virtualization, SDN, NFV and things like Network Automation and Orchestration, to start with.

I hope you have enjoyed my first blog post and I welcome any comments from you.

Why I created this blog?

Hello All,

My name is Mirza Waqas Ahmed and I am just another guy in the world of networking. In that world I have gone through many experiences, and I have been thinking of sharing them with all of you in order to learn and experience more.

I have been in the networking industry for 12 years now and have seen different technologies come together, but nothing impressed me the way Network Virtualization did. Needless to say, I wanted to learn more about it, and there is nothing better than blogging and listening to the views of the world.

So here I am and starting this blog to share my thoughts and listen to everyone else.

I hope this blog will be a success and I can share and learn the best out of it.

Thanks

Waqas