
Category: VMware

How To: Upgrade vCSA 6.0 to vCSA 6.5

Today marks the release of vSphere 6.5 and with that, a new vCenter Server Appliance that is worth paying attention to. Beyond the traditional boost of configuration maximums and security, this version comes loaded with features that have been requested over the past few years. Some highlights include:

  • Built-in migration tool to go from vCenter Server 5.5 or 6.0 to vCSA 6.5
  • Built-in VMware Update Manager
  • Native HA support deployed in Active-Passive-Witness architecture
  • No Client Integration Browser Plug-in
  • Adobe Flex AND HTML5 web clients
  • API Explorer via vCSA portal page for your automation needs
  • Tons of other little enhancements that you can read about here

    This post will be a guide on getting you from vCSA 6.0 to 6.5, with setting up vCSA HA to follow at a later date.

    Crack open the ISO in your preferred flavor of OS and run the vCenter Server Appliance Installer. You’ll be greeted by the welcome screen.

    Hit next, accept the EULA, and fill out your environment info: the FQDN/IP of your 6.0 vCSA, SSO details, and the ESXi host that is currently housing your 6.0 vCSA.

    Now, if you are running VUM on a Windows server in your environment, you will see the following error: “Unable to retrieve the migration assistant extension on source vCenter Server. Make sure migration assistant is running on the VUM server.” Copy the ‘migration-assistant’ folder to the VUM server, run ‘VMware-Migration-Assistant.exe’, type in the password for the VUM service account, and return to the vCSA 6.5 installer.

    The next few pages cover choosing your cluster resources, folder organization, and general deployment information. Since this was done in my lab, I chose to stick with the ‘tiny’ vCenter deployment since I do not expect to ever need anything larger than that… hopefully.

    Once all that is done and dusted, you will get to the confirmation page to verify you didn’t fat finger any settings. If they all look good, click Finish.

    Assuming everything was chosen properly, you will see a lovely completion screen.
    Congrats, you now have two vCSAs running… but that’s not what we are here for. We want to decommission the 6.0 in favor of 6.5 with all of our lovely settings. So let’s get cracking.


    Hit next and fill in your vCSA 6.0 info as well as the host that is running your 6.0 vCSA. You may get a warning about DRS being enabled on the cluster, so feel free to adjust that setting if DRS is configured too aggressively.

    Next you will choose what data you wish to migrate from your old 6.0 to your new 6.5. I wanted all that lovely historical data, so I went with the longer, last option.

    After that, you should be good to go! You will see some progress bars and then be greeted with links to your shiny, new 6.5 vCSA. *Hint* It’s the same info as your 6.0, thanks, migration!


    After you log in, check out your About vSphere menu and you should see vSphere 6.5 listed as the current build. You will also notice that your original 6.0 VM is powered off and can be decommissioned to your liking.

    From there, you can hop into the Update Manager tab and upgrade your hosts to 6.5 automatically as well! Happy trails, friends, and enjoy all the new awesomeness that vCSA 6.5 has dropped into your lap.


    DISA Approves STIGs for VMware NSX on DoD Networks

    Photo credit: @_stump

    There are a lot of abbreviations in this title, so I will give a very brief rundown of what it all means and why some of you should care.

    In the public sector, our systems are hardened (locked down) a bit more drastically than your traditional private company might do things. Simply deploying a fresh copy of Windows from ISO is prohibited unless strictly spelled out in your lab environment. The governing body that regulates these mandatory compliance settings is known as the Defense Information Systems Agency, or DISA for short. They work closely with product teams to ensure that when a product is deployed onto a network, it is as secure as possible while still maintaining functionality. These guides are known as STIGs, or Security Technical Implementation Guides.

    With DISA approving the NSX STIGs, VMware’s NSX becomes the first software-defined networking solution to receive them.

    Now, as anyone who has deployed STIGs knows, the settings within them sometimes have a tendency to break existing functionality. With that said, take your time, test everything as you implement, and don’t be afraid to note any exemptions your project may need. Work closely with your ISSOs and document everything up front, as it will save you pain as you go along.

    Here are direct links to the zips for the STIGs above:

    VMware NSX STIG Overview, Version 1
    VMware NSX Manager STIG, Version 1
    VMware NSX Distributed Firewall STIG, Version 1
    VMware NSX Distributed Logical Router STIG, Version 1


    VMware Cloud Foundation – At a glance

    Today was the first day of VMworld 2016 in Las Vegas, and within the first of two General Sessions at the conference, VMware announced a new product dubbed Cloud Foundation. Cloud Foundation is a full-stack integrated solution that allows your enterprise to seamlessly consume private and public clouds in an automated, easily managed fashion.

    Here is what you need to know about it:

    • Integrates vSphere, NSX, and VSAN into a unified stack managed by a new tool called SDDC Manager
    • SDDC Manager automates and simplifies the deployment of the entire stack
    • SDDC Manager does NOT take the place of vCenter
    • Cloud Foundation minimum requirements are 8 nodes as of writing (min. 4 nodes coming soon)
    • 1 vCenter per SDDC Manager
    • SDDC Manager can manage up to 192 nodes
    • vRA does not come with Cloud Foundation but can be used in conjunction with vCF (SDDC Manager handles the underlying infrastructure, vRA handles VMs)
    • vCF via NSX extensibility allows for seamless migration between private and public clouds
    • As of writing, IBM SoftLayer is the only supported public cloud (obviously, Azure and AWS are en route) and VCE VxRack 1000 is the only supported full-rack solution
    • Cisco and Arista are your ToR and Spine supported network solutions for the time being
    • vCF supports VSAN Ready Nodes (8 of them minimum) as well
    • Licensing is per CPU
    • GA is September 1, 2016

    There ya have it, the quick and dirty rundown of what vCF brings to the table. VMware has teased this type of solution before with EVO SDDC, which is being retired as of September 1 in favor of vCF. vCF brings more IP to the table and is what EVO SDDC should have been when it was first announced.


    vSphere Thick Client? I don’t need no stinkin’ thick client!

    Quick post about an awesome new VMware Fling that was released somewhat recently. I’m a little late to the party, but I haven’t needed to deploy a new host since its release until today.

    If you haven’t heard, VMware Flings are small applications built by VMware engineers that aren’t officially supported but can still prove very powerful for your environment. Recently, they addressed something that has bugged me since the day VMware announced that the web client was the future and the C# thick client was going bye-bye. Long story short, if you wanted to directly interface with an ESXi host without vCenter middle-manning, you were left with either PowerCLI, SSH, or the bloated C# client.

    This new fling is called the ESXi Embedded Host Client, a lightweight web client installed on your hosts that gives you a familiar vCenter web client experience. It takes about a minute to install via a one-line esxcli command.

    SSH into your host(s) and execute this command:
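    At the time of writing, the Fling page documented a single esxcli install pulling the signed VIB straight from VMware’s download servers; the command looked like this (check the Fling page for the current URL before running it):

```shell
# Install the Embedded Host Client VIB directly from VMware's servers.
# The URL below is the one published on the Fling page at the time; verify it first.
esxcli software vib install -v http://download3.vmware.com/software/vmw-tools/esxui/esxui-signed.vib
```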

    That will pull down the latest version of the Fling from VMware’s servers and auto-install. From that point on, you can point your favorite flavor of browser at the DNS name/IP of your host and interface with it as you please.

    If you find you love this Fling and want to deploy it across your datacenter, Brian Graf wrote a nice pCLI script to automate the whole ordeal, which you can find here.


    Upgrading vCSA 6.0 to 6.0U1 without VAMI

    Quick post on what I believe to be the fastest way to update your vCSA 6.0 installation. The VAMI that comes with vCSA is a great little tool, but I find it to be hit or miss at times, so I wanted a more reliable and visible way to upgrade. Behold the baked-in software-packages tool via SSH!

    1) Go to the VMware patch portal, search for the latest VC patches, and download
    2) Upload the ISO to a datastore visible to your vCSA
    3) Attach the *.iso to the vCSA VM
    4) SSH into the vCSA with your root credentials
    5) Run software-packages install --iso --acceptEulas and wait for the update to finish

    6) Reboot the vCSA via shutdown reboot -r "updating" and rejoice!
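    Stitched together, steps 5 and 6 from the appliance shell look roughly like this (a sketch; run it from an SSH session as root with the patch ISO already attached):

```shell
# Install all packages from the attached patch ISO, accepting EULAs non-interactively
software-packages install --iso --acceptEulas

# Once the install reports success, reboot the appliance (the -r argument is just a reason string)
shutdown reboot -r "updating"
```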


    vCSA 6.0 CLI Install

    It is the week following VMworld 2015, so that marks my annual homelab wipe. I normally do this after every VMworld due to the renewed urge to test out all the new tricks I learned over the course of the week. In doing this, I decided to ditch the new vCSA 6 web installer in favor of the CLI. I work in environments where browsers are typically locked down to the point of supreme frustration, so below you’ll find what is, in my opinion, a faster way of deploying the vCSA using JSON.

    Note: This is simply a fully embedded vCSA install using the vPostgres database and localized SSO.

    First, mount the vCSA ISO to your system (or extract the ISO if that is your preference), fire up command prompt/terminal, and change directories to the “vcsa-cli-installer” folder. As you can see, there are tools for OS X, Windows, and Linux so you can pull this off on any system you prefer.

    The tool utilized is vcsa-deploy. This application can accept direct parameters or reference a JSON file, which is what this article will concentrate on. Simply run
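    something like the following (the JSON filename is a placeholder; check vcsa-deploy --help for the exact flag syntax on your build):

```shell
# From the vcsa-cli-installer folder for your OS, point vcsa-deploy at your template
vcsa-deploy --accept-eula my-vcsa.json
```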

    and away you go.

    Below is the gist of the JSON file I created for my install.
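    A minimal embedded-deployment template looks roughly like this; the keys follow the 6.0 full_conf.json layout, and every hostname, IP, and password below is a placeholder to swap for your own:

```json
{
    "deployment": {
        "esx.hostname": "esxi01.lab.local",
        "esx.datastore": "datastore1",
        "esx.username": "root",
        "esx.password": "CHANGEME",
        "deployment.option": "tiny",
        "deployment.network": "VM Network",
        "appliance.name": "vcsa6",
        "appliance.thin.disk.mode": true
    },
    "vcsa": {
        "system": {
            "root.password": "CHANGEME",
            "ssh.enable": true
        },
        "sso": {
            "password": "CHANGEME",
            "site-name": "Default-First-Site"
        },
        "networking": {
            "ip.family": "ipv4",
            "mode": "static",
            "ip": "192.168.1.50",
            "prefix": "24",
            "gateway": "192.168.1.1",
            "dns.servers": "192.168.1.1",
            "system.name": "vcsa6.lab.local"
        }
    }
}
```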

    As you can see, it’s pretty self-explanatory, so just adjust the settings to fit your environment and fire! You can also check out /vcsa-cli-installer/templates/full_conf.json for more settings if you are curious.

    Assuming your JSON is formatted properly, you should see output similar to this:

    And there you have it, a fresh vCSA 6.0 install without having to use the web installer.


    Quick Script: Syslog Server Updater

    Recently deployed a new syslog server and needed a script to update the ~20+ ESXi hosts as fast as possible. This is pretty cut and dried in terms of what happens: it prompts for the vCenter and syslog addresses, then updates the Syslog Server field on each host associated with that vCenter and allows UDP/514 through the ESXi firewall.
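    A PowerCLI sketch of what such a script might look like, matching the behavior described above (the advanced setting name Syslog.global.logHost and the syslog firewall ruleset are standard ESXi names; everything else is illustrative):

```powershell
# Prompt for the environment details described above
$vc     = Read-Host "vCenter address"
$syslog = Read-Host "Syslog address (e.g. udp://syslog.local:514)"

Connect-VIServer -Server $vc

foreach ($esx in Get-VMHost) {
    # Update the Syslog Server field on each host
    Get-AdvancedSetting -Entity $esx -Name "Syslog.global.logHost" |
        Set-AdvancedSetting -Value $syslog -Confirm:$false

    # Allow syslog traffic (UDP/514) through the ESXi firewall
    Get-VMHostFirewallException -VMHost $esx -Name "syslog" |
        Set-VMHostFirewallException -Enabled:$true
}

Disconnect-VIServer -Confirm:$false
```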


    vCAC and Linux Guest Agent How-To and Gotchas

    Earlier this week, I ran into an issue in a new environment that I had just deployed. The vCloud stack was installed as vCAC 6.1 Appliance, external vCO, and vCAC IaaS VM running on Windows Server 2012R2.

    In this post, we’ll run through setting up a CentOS VM with the vCAC guest agent in order to get all the goodies that come with it, like management of new disks and networking, as well as execution of scripts after deployment. This tutorial can be applied to other distros like Ubuntu, Debian, or SLES, but for this example, I kept it in the EL6 family.

    What you’ll need:
    • CentOS VM
    • Linux Guest Agent packages
    • Certificate file from your vCAC IaaS server
    • Working DNS

    Since this VM will be a template, I won’t tell you what you should or shouldn’t put on it, but may I suggest giving ‘yum update -y’ a little love? After that is completed, you need to get the LGA (Linux Guest Agent) packages onto the server. This zip is located on your vCAC server at port 5480/installer, e.g. https://vCAC-server.local:5480/installer. Feel free to use SCP or wget with the --no-check-certificate flag. Lastly, explode the zip to the directory of your choice.

    Next, you need to install the certificate of the IaaS server you deployed. Whether it was self-signed or from a CA, we need a copy of it on our soon-to-be template VM. The easiest way is to use the browser of your choice and go to your IaaS FQDN, e.g. https://IaaS-server.local/. Click the lock on the far left of the address bar, view the certificate information, switch to the Details tab, then choose Copy to File, leaving it as an encoded X.509 .CER and saving it wherever you choose. SCP this file onto your VM; we’ll come back to it in a moment.

    Now let’s get to installing the actual agent. Change directories to where you unzipped the prior package and go into the architecture of your distro. In our case, we’re going into /rhel6-amd64 and then running:
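    From inside the rhel6-amd64 folder, the install is just an rpm command (the exact package filename varies by agent version, hence the glob):

```shell
# Install the Linux Guest Agent package for EL6 x86_64
rpm -i gugent-*.rpm
```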

    This will install itself to /usr/share/gugent/ so change directories to that path. Remember the IaaS cert? Now is the time to copy it to /usr/share/gugent/axis2/ and run:
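    Assuming you saved the .cer in root’s home directory (the source path and filename here are examples), the copy boils down to placing it where axis2 expects it:

```shell
# Drop the IaaS certificate into the axis2 directory under the default name
# (cert.pem in /usr/share/gugent/axis2/, per the axis2.xml defaults noted below)
cp /root/IaaS-server.cer /usr/share/gugent/axis2/cert.pem
```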

    Note: If you open /usr/share/gugent/axis2.xml, you can change the final name and path of where the cert file will exist. By default, the cert file will be named cert.pem and reside in /usr/share/gugent/axis2/
    Now run the install script in /usr/share/gugent as such:
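    From memory of the vCAC 6.x agent, the script takes the IaaS endpoint and an SSL flag; treat the argument syntax below as an assumption and check the script’s own usage output:

```shell
# Register the guest agent against the IaaS server over SSL
# (hostname and port are examples; verify syntax with ./installgugent.sh --help)
./installgugent.sh IaaS-server.local:443 ssl
```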

    To verify everything is working properly, run the agent and ensure all you see are [Debug] and not [Error] messages.

    If you do see errors, they’re most likely cert-related; grep through /usr/share/gugent/axis2/logs/gugent-axis.log to verify. If you see:

    [info] [ssl client] Client certificate chain file not specified
    [error] ssl/ssl_utils.c(153) Error occured in SSL engine
    [error] ssl/ssl_stream.c(108) Error occured in SSL engine

    Ensure you have placed the cert in the correct directory and/or modified axis2.xml to reflect wherever the finalized cert.pem exists. You will know you’re good to go once you see:

    [Thu Mar 19 15:58:37 2015] [debug] ssl/ssl_utils.c(190) [ssl client] SSL certificate verified against peer

    Now finish setting up your template to your liking with a kickstart script and you’re done!


    The heavens parted and then… ESXi Heartbleed patch!

    Better late than never, yea? Quick Saturday post, here is how to get your host up to date real quick via SSH, generate new certs, and change the root password. Better safe than sorry, friends.

    Note: This is only for ESXi 5.5 Update 1! If you are not running 5.5u1, replace ESXi-5.5.0-20140404001-standard with ESXi-5.5.0-20140401020s-standard.

    This will be the quick way to do it. Your environment may not let you enable the built-in httpClient within ESXi, but I am going to assume that will not be an issue, since I am currently doing these patches on my homelab, where I am the boss!

    Enable SSH on your host(s) and remote in via terminal/putty

    Enable the ESXi built-in httpclient:

    Pull down and install the patch:

    Backup your ‘old’ SSL keys:

    Generate new keys and chmod them:

    Reboot the host:

    Once the host comes back up, SSH back in and change the root password: passwd root
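    Stitched together, the steps above look roughly like this from an SSH session on the host (a sketch; the depot URL is VMware’s public online depot, and the profile name follows the note above for 5.5u1):

```shell
# 1) Enable the built-in httpClient so the host can reach VMware's online depot
esxcli network firewall ruleset set -e true -r httpClient

# 2) Pull down and install the Heartbleed patch profile
#    (swap the profile name per the note above if you are not on 5.5u1)
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-5.5.0-20140404001-standard

# 3) Back up the 'old' SSL keys
mv /etc/vmware/ssl/rui.key /etc/vmware/ssl/rui.key.bak
mv /etc/vmware/ssl/rui.crt /etc/vmware/ssl/rui.crt.bak

# 4) Generate new keys and chmod them
/sbin/generate-certificates
chmod 400 /etc/vmware/ssl/rui.key
chmod 644 /etc/vmware/ssl/rui.crt

# 5) Reboot the host, then SSH back in and run 'passwd root'
reboot
```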

    That’s all there is to it. These types of security issues are no fun for anyone but it comes with the territory. Cheers!

    VMware KB#2076665 – Resolving OpenSSL Heartbleed for ESXi 5.5


    Centralized rsyslog with ESXi 5.x hosts

    One of the most important things in any environment is the syslog server. A centralized host that collects all the debug, runtime, and access information feeding your Kibana/Logstash or Splunk implementations will make any sysadmin’s life easier. The walk-through below sets up a central server running rsyslog, accepting logs on 514 over TCP and UDP, and placing them in dated folders for easier organization. Let’s dive in:

    Create a dump folder for your syslog structure:

    Edit /etc/rsyslog.conf and uncomment the TCP and UDP reception lines, changing the receiving port to your liking:

    Create a conf file within /etc/rsyslog.d (e.g. daily_log.conf) and define the daily rotation:

    Recycle the rsyslog service:
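    The four server-side steps above, end to end, might look like this on an EL-family box (the dump path matches the layout described below; the listener directives are standard rsyslog legacy syntax):

```shell
# 1) Create a dump folder for the syslog structure
mkdir -p /var/log/syslogd

# 2) In /etc/rsyslog.conf, uncomment the TCP and UDP listeners (port to taste):
#      $ModLoad imudp
#      $UDPServerRun 514
#      $ModLoad imtcp
#      $InputTCPServerRun 514

# 3) Define the daily rotation in /etc/rsyslog.d/daily_log.conf:
#    dated year/month/day folders, one file per sending host
cat > /etc/rsyslog.d/daily_log.conf <<'EOF'
$template DailyLogs,"/var/log/syslogd/%$YEAR%/%$MONTH%/%$DAY%/%HOSTNAME%.log"
*.* ?DailyLogs
EOF

# 4) Recycle the rsyslog service
service rsyslog restart
```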

    That covers the syslog server side of things, now to get rid of that annoying ‘system logs are not on persistent storage’ warning.

    You can add this info to a host profile and apply it against all your hosts if your environment is large, but for example purposes, this will be a one-off host. You can also easily set this up via pCLI script.

    Display your current settings:

    Adjust syslog settings:

    Recycle ESXi syslog service:

    Open up syslog ports on ESXi firewall:
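    On the ESXi side, the four steps above map to these esxcli commands (the loghost value is an example; point it at your own server):

```shell
# Display current syslog settings
esxcli system syslog config get

# Point the host at the central syslog server
esxcli system syslog config set --loghost='udp://syslog.lab.local:514'

# Recycle the ESXi syslog service so the change takes effect
esxcli system syslog reload

# Open the syslog ports on the ESXi firewall
esxcli network firewall ruleset set -r syslog -e true
esxcli network firewall refresh
```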

    And that’s it! Now on your syslogd server, you should see a directory path similar to /var/log/syslogd/year/month/day/hosts*.log

    From here on out, you can point all of your log analyzers to the centralized syslog server and keep an eye on your ESXi hosts. Cheers!
