Posts Tagged ‘upu’

Managing ports for multiple FreeBSD servers

Monday, July 31st, 2017

This is a follow-up post on how to manage ports for multiple FreeBSD servers. If you’re looking for how to update the operating system itself, have a look at my almost three-year-old post: Managing multiple FreeBSD servers.

Alright, so what we’re trying to solve is this: you have multiple VMs running the same (or different) releases of FreeBSD, and you want a way to centralize package delivery to all of them.
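One common shape for this, sketched under assumptions (the post’s actual approach may differ): build packages centrally, for example with poudriere, serve the resulting repository over HTTP, and point each VM at it with a repo config. The repo name and URL below are made up:

  # /usr/local/etc/pkg/repos/custom.conf -- hypothetical central repo
  custom: {
    url: "http://pkg.example.org/${ABI}/latest",
    enabled: yes
  }
  # optionally disable the official repo so only the central one is used
  FreeBSD: { enabled: no }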


make buildworld & IBM x3550 m3

Saturday, June 24th, 2017

Upgrade from 11.0-RELEASE to 11.1-BETA3

Brand: IBM x3550 m3

In order to boot the server successfully, make sure to enable legacy support, as per this thread.

Processor: 2 x Intel Xeon E5620 2.40GHz (4 cores each)
Memory: 8GB
HDD: 2 x 146GB (10k RPM, 6Gbps SAS 2.5-inch) in RAID1

Softupdates: ON
SMP: ON

  CPU: Intel(R) Xeon(R) CPU E5620  @ 2.40GHz (2400.13-MHz K8-class CPU)
  real memory  = 8589934592 (8192 MB)
  avail memory = 8244543488 (7862 MB)
  mfi0: <LSI MegaSAS Gen2> port 0x1000-0x10ff mem 0x97940000-0x97943fff,0x97900000-0x9793ffff irq 16 at device 0.0 on pci11
  mfi0: Using MSI
  mfi0: Megaraid SAS driver Ver 4.23
  mfi0: FW MaxCmds = 1008, limiting to 128
  mfid0: 139236MB (285155328 sectors) RAID volume (no label) is optimal

make -j4 buildworld: 1h 36m 28s
make -j4 buildkernel: 5m 58s
make installkernel: 13s
make installworld: 3m 32s
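For reference, the timings above map onto the usual source-upgrade procedure; here it is as a sketch (per the FreeBSD Handbook of that era, including the mergemaster steps, which aren’t timed above):

  cd /usr/src
  make -j4 buildworld
  make -j4 buildkernel
  make installkernel
  shutdown -r now     # boot the new kernel before installing world
  mergemaster -p      # pre-installworld merge of critical config files
  make installworld
  mergemaster         # merge the remaining config files
  shutdown -r now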

DRBD with OCFS2 and fstab

Sunday, May 28th, 2017

A two-node active/active DRBD cluster implemented on Debian Jessie with OCFS2 on top of it, so the file system can be mounted and accessed on both nodes at the same time. Sounds like an easy-peasy task considering the amount of articles on the web (mostly copy/paste of the same content, though).

So, you finish the setup, everything is synced and shiny, you edit fstab, perform the final reboot, and… oopsie daisy, nothing is mounted. You start digging in the _netdev direction, or suspecting that the order in which drbd and ocfs2 start is to blame, or putting the mount stanza into rc.local — none of this helps. You might even tell yourself that you won’t reboot those servers often, but the fact that you need to perform some manual post-reboot actions doesn’t sound promising at all. Particularly if it’s an unexpected reboot over a weekend. Particularly if it happens years after the installation, so you need to find (and, most importantly, remember that you have) those notes. Particularly if you’ve already quit this job and there is another poor fella taking care of the servers. And finally, to make things even more complicated, you might have services that actually depend on the availability of the mounted drive after the reboot (Apache or Samba, for example).

Obviously, this needs to be fixed once and for all, and I have good news for you. :) If you were vigilant during troubleshooting, you’d have noticed that a) if you try to mount the drive through /etc/rc.local, a warning is thrown at boot time (something about a missing device), and b) when you mount the drbd drive manually, it’s not mounted instantly — there is a delay of several seconds before the disk is successfully attached. That brought me to the suspicion that drbd is simply not ready at the time the mount in /etc/rc.local is executed, and that deliberately introducing some delay would improve things. And voila — it really did the trick!

Here is my /etc/fstab entry:

  /dev/drbd0   /var/www   ocfs2   noauto,noatime   0   0

And here is my /etc/rc.local, introducing a 30-second delay prior to the mount, to give DRBD enough time to settle:

  sleep 30
  mount /dev/drbd0
  exit 0

Now, I’m not sure whether this is by design, since DRBD nodes do have to communicate with each other (initial election and/or sync), and that contributes to the delay in creating /dev/drbd0, or whether my environment is just generally slow (everything is virtualized on not-so-super-fast SATA drives), but it works.
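If a fixed 30 seconds ever proves too short (or needlessly long), an untested variant of the same rc.local idea is to poll for the device node with a timeout instead:

  #!/bin/sh
  # wait up to 60 seconds for the DRBD device node to appear
  n=0
  while [ ! -b /dev/drbd0 ] && [ "$n" -lt 60 ]; do
      sleep 1
      n=$((n+1))
  done
  mount /dev/drbd0
  exit 0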

NSD and OpenDNSSEC under FreeBSD 10 [Part 5: SafeNet HSM]

Wednesday, May 18th, 2016

This is the fifth part in the series of articles explaining how to run NSD and OpenDNSSEC under FreeBSD 10.

This time we’re going to integrate proper hardware HSM support into our setup: a pair of SafeNet Network HSMs (aka Luna SA).

Here is what our updated installation diagram looks like:

[Installation diagram: NSD and OpenDNSSEC with a pair of SafeNet Network HSMs]

Before we jump into technical details there are a couple of assumptions:

— I assume that the HSMs are already configured and partitioned. HSM installation is outside the scope of this guide since it’s a lengthy and pretty time-consuming process which has nothing to do with OpenDNSSEC. It also involves a big chunk of work on the access federation front (different teams accessing different partitions with different PEDs or passwords). SafeNet’s HSM documentation is quite solid, though, so make sure this part is completed. In our setup, both HSMs run the latest software, 6.2.0-15, and there is one partition called TEST created on both units. The TEST partition is activated, and we’re going to create a High Availability group, add both HSMs to it, and allow NS-SIGN to access it;

— As you might have noticed, I decided to leave ZSKs to SoftHSM. One of the things you’ll have to keep an eye on with network HSMs is storage space. The way it works with SafeNet is that the appliance has a fixed total amount of storage (let’s say 2MB). You then create partitions, and each partition gets an allocation out of that total (by default an equal share). So let’s assume we created five partitions of 417274 bytes each. Normally, storing a public/private key pair consumes very little, but with OpenDNSSEC we’re talking about a number of domains, each storing a public/private key pair for both the KSK and the ZSK. It’s very important to understand how far you can go, so you’re not surprised several years down the road when you discover you’ve run out of space.

Let’s do some basic math: one domain, with both ZSK (1024-bit) and KSK (2048-bit) stored on the HSM, will consume 2768 bytes, so with a 417274-byte partition you should be able to handle ~150 domains. However, during a ZSK or KSK rollover another key pair is temporarily created, and although ZSK and KSK rollovers shouldn’t happen at the same time, and OpenDNSSEC will purge expired keys once the rollover is completed, you’ll have to budget an extra 2768 bytes per domain (for the period defined in the <Purge> stanza in kasp.xml), which leaves you ~75 domains. As you can see, this isn’t much. That’s why I decided to keep SoftHSM for ZSKs to save some HSM space (which is not cheap, to say the least!).

One of the disadvantages of keeping both storage engines is that you’ll have one more dependency to worry about should you consider upgrading (to SoftHSM2, for example), so the choice is yours. Another option would be to store only private keys in the HSM and keep public keys out of it (the <SkipPublicKey/> option in conf.xml), but I’ve read that this is very much dependent on the HSM provider and could lead to unexpected results. And one more option would be to use <ShareKeys/> in kasp.xml — that way you can share the same key across multiple domains.
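For reference, here is roughly where those two options live. This is a sketch, with the PKCS#11 module path and PIN as placeholders for your environment (TEST is the partition label from above):

  <!-- conf.xml: Repository entry fragment -->
  <Repository name="LunaSA">
      <Module>/usr/lib/libCryptoki2_64.so</Module>
      <TokenLabel>TEST</TokenLabel>
      <PIN>XXXX</PIN>
      <SkipPublicKey/>
  </Repository>

  <!-- kasp.xml: inside a policy's <Keys> element -->
  <Keys>
      ...
      <ShareKeys/>
  </Keys>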


Viewing package ChangeLog with rpm

Monday, April 4th, 2016

Here is how to view the ChangeLog of an installed package using rpm under CentOS:

  rpm -q --changelog libuuid-2.23.2-26.el7_2.2.x86_64 | more

  * Wed Mar 16 2016 Karel Zak <kzak@redhat.com> 2.23.2-26.el7_2.2
  - fix #1317953 - lslogins crash when executed with buggy username

The same applies to the kernel. By adding the -p switch you can check an rpm file itself without installing it:

  rpm -qp --changelog kernel-plus-3.10.0-327.13.1.el7.centos.plus.x86_64.rpm | more

  * Thu Mar 31 2016 Akemi Yagi <toracat@centos.org> [3.10.0-327.13.1.el7.centos.plus]
  - Apply debranding changes
  - Roll in i686 mods
  - Modify config file for x86_64 with extra features turned on including

FreeBSD template for ManageEngine OpManager

Friday, March 18th, 2016

We use OpManager by ManageEngine to monitor our infrastructure. Most Linux flavors are already covered by the default templates in OpManager. Moreover, you’ll be able to get interface statistics and CPU/RAM utilization of FreeBSD servers with the included UCD SNMP MIBs. The only bit missing was partition monitoring for FreeBSD, hence I decided to spend a bit of my time and finally build a template that can be used in OpManager to monitor FreeBSD servers.

It’s confirmed to work with the latest OpManager 11 (build 11600) and FreeBSD 10.x without UCD Net-SNMP installed, only bsnmpd with bsnmp-ucd. The reason for bsnmpd is simple: it’s lightweight and part of the FreeBSD base system, so you don’t need to install anything extra, and bsnmp-ucd (available under /usr/ports/net-mgmt/bsnmp-ucd) is a module for bsnmpd that implements parts of UCD-SNMP-MIB, while UCD Net-SNMP requires a massive number of dependencies.

Once bsnmp-ucd is installed, enable the UCD module in /etc/snmpd.config and restart bsnmpd:

  # UCD module
  begemotSnmpdModulePath."ucd" = "/usr/local/lib/snmp_ucd.so"
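If bsnmpd isn’t running yet, enable and restart it the usual way (sysrc just edits /etc/rc.conf):

  sysrc bsnmpd_enable=YES
  service bsnmpd restart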

So here we go (you can also download it from here; just make sure to change the extension to XML):

  <?xml version="1.0" encoding="UTF-8"?>
  <CustomDevicePackage>
  <CustomDevicePackage deviceType="FreeBSD" iconName="linux.png" pingInterval="5">
  <SysOIDs>
  <SysOID oid=".1.3.6.1.4.1.12325.1.1.2.1.1"/>
  </SysOIDs>
  <GRAPHDETAILS>
  <Graph SaveAbsolutes="false" YAXISTEXT="Percentage" customGraph="false" description="Monitors the Memory utilization based on UCD SNMP MIB" displayName="Memory Utilization(UCD SNMP MIB)" failureThreshold="1" graphID="966" graphName="Lin-MemoryUtilization" graphType="node" isNumeric="true" oid="(.1.3.6.1.4.1.2021.4.5.0-.1.3.6.1.4.1.2021.4.6.0-.1.3.6.1.4.1.2021.4.14.0-.1.3.6.1.4.1.2021.4.15.0)*100/.1.3.6.1.4.1.2021.4.5.0" period="900" protocol="SNMP" sSave="true" timeAvg="false">
  <OTHEROIDS/>
  </Graph>
  <Graph SaveAbsolutes="false" YAXISTEXT="Percentage" customGraph="false" description="Monitors the CPU Utilization based on UCD SNMP MIB" displayName="CPU Utilization(UCD SNMP MIB)" failureThreshold="1" graphID="315" graphName="Lin-CPUUtilization" graphType="node" isNumeric="true" oid=".1.3.6.1.4.1.2021.11.9.0" period="900" protocol="SNMP" sSave="true" timeAvg="false">
  <OTHEROIDS/>
  </Graph>
  <Graph DisplayColumn=".1.3.6.1.4.1.2021.9.1.2" Index=".1.3.6.1.4.1.2021.9.1.1" SaveAbsolutes="false" YAXISTEXT="Percentage" customGraph="false" description="Monitoring the usage in each partition of the FreeBSD Device." displayName="Partition Details of the FreeBSD Device (%)" failureThreshold="1" graphID="252000" graphName="BSDPartitionWiseDiskDetails" graphType="multiplenode" isNumeric="true" oid="(.1.3.6.1.4.1.2021.9.1.8*100/.1.3.6.1.4.1.2021.9.1.7)" period="900" protocol="SNMP" sSave="true" timeAvg="false">
  <OTHEROIDS/>
  </Graph>
  </GRAPHDETAILS>
  <Category name="Server"/>
  <Vendor name="net-snmp"/>
  <Version version="2016031804"/>
  </CustomDevicePackage>
  </CustomDevicePackage>

Noteworthy sections:

SysOID oid=: this is the FreeBSD system identifier. When you add a new FreeBSD server, the template will be attached automatically based on its SysOID.

CPU and RAM sections were copied from the standard Linux template.

DisplayColumn=: .1.3.6.1.4.1.2021.9.1.2 is a list of available partitions (/, /usr, /var, etc.).

Index=: .1.3.6.1.4.1.2021.9.1.1 is a list of IDs of available partitions.

oid=: (.1.3.6.1.4.1.2021.9.1.8*100/.1.3.6.1.4.1.2021.9.1.7) is used to calculate the percentage of utilization of a particular partition, where .1.3.6.1.4.1.2021.9.1.8 is used space and .1.3.6.1.4.1.2021.9.1.7 is available space.
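Before importing the template, it’s worth checking that the UCD disk OIDs actually respond. A quick test from any machine with the net-snmp tools installed (community string and address below are placeholders):

  snmpwalk -v 1 -c public 192.0.2.10 .1.3.6.1.4.1.2021.9.1.2   # dskPath: partition paths
  snmpwalk -v 1 -c public 192.0.2.10 .1.3.6.1.4.1.2021.9.1.8   # dskUsed: used space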

Hope it helps.

Unattended installation of CentOS 7 with Kickstart

Sunday, March 13th, 2016

While setting up my first Hadoop cluster, I was faced with the dilemma of how to perform installations of CentOS 7 on multiple servers at once. If you have 20 data nodes to deploy, anything you choose to automate the installation will greatly reduce the deployment time, but most importantly, it will eliminate the possibility of human error (a typo, for example).

Initially, I started looking in the disk-cloning direction. Since all my data nodes are identical, I was thinking of preparing one data node, dd-ing its system drive, placing the image on an NFS share, then booting each server and re-imaging its system drive using the dd image from the share. Clonezilla and DRBL seem to be the perfect pair for such a scenario. And although you’ll spend some time configuring, testing and tuning it, it was still worth looking into.

Then I realized that even if I managed to establish the setup above, I’d still have to deal with manual post-installation tweaks, like regenerating SSH keys and probably adjusting MAC addresses. On top of that, transferring the raw dd image (in my case ~30GB) might take longer than the initial installation itself. Therefore I ended up using the Kickstart method. I’m pretty sure there are more efficient solutions, and if you happen to know one I’d love to hear your comments.
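For a flavor of what a Kickstart answer file looks like, here is a minimal sketch (mirror URL and password hash are placeholders):

  # ks.cfg
  install
  url --url=http://mirror.example.org/centos/7/os/x86_64/
  lang en_US.UTF-8
  keyboard us
  rootpw --iscrypted $6$placeholder
  timezone UTC --utc
  clearpart --all --initlabel
  autopart
  reboot

  %packages
  @core
  %end

Served over HTTP, a file like this can be handed to the CentOS 7 installer at the boot prompt via inst.ks=http://server/ks.cfg.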


How to configure vLAG on a Brocade VDX 6740T-1G switch to work with SafeNet Network HSM

Tuesday, January 26th, 2016

Caution! I deleted my previous post on how to configure vLAG on a Brocade VDX 6740T-1G switch to work with SafeNet Network HSM because it didn’t actually work as it should. If you get a cached version somewhere, please disregard it.

I have no idea how I managed to get bonding to operate in round-robin mode on SafeNet Network HSM:

  [hsm-node-1] lunash:>network interface bonding show

  -----------------------------------------------------------
  Ethernet Channel Bonding Driver: v3.4.0-2 (October 7, 2008)

  Bonding Mode: load balancing (round-robin)

Because once the appliance was rebooted, the bonding mode changed to active-backup and the whole story with LAGs became irrelevant. The primary interface started flapping again, and the only way to stabilize connectivity to the HSM was to disable the slave interface.

  [hsm-node-1] lunash:>network interface bonding show

  -----------------------------------------------------------
  Ethernet Channel Bonding Driver: v3.4.0-2 (October 7, 2008)

  Bonding Mode: fault-tolerance (active-backup)

So, back to the original subject of the post: how do you configure a LAG on a Brocade switch to work with SafeNet Network HSM? The answer is: you don’t. In fault-tolerance bonding mode, when one interface is active and the other is backup (read: passive), you don’t create any LAGs on the switch. All you have to do is set both interfaces to switchport access mode and ensure that the VLAN and speed settings are identical. Here is what our switch config looks like:

  !
  interface TenGigabitEthernet 12/0/2
   speed 1000
   description -=HSM-NODE-1:ETH0=-
   switchport
   switchport mode access
   switchport access vlan 12
   spanning-tree shutdown
   no fabric isl enable
   no fabric trunk enable
   no shutdown
  !
  interface TenGigabitEthernet 13/0/2
   speed 1000
   description -=HSM-NODE-1:ETH1=-
   switchport
   switchport mode access
   switchport access vlan 12
   spanning-tree shutdown
   no fabric isl enable
   no fabric trunk enable
   no shutdown
  !
Now, you certainly lose the link aggregation and load balancing functionality, because only one interface passes traffic at a time; the slave interface comes into play only if the primary goes down. We’re still good on redundancy, though: you can disconnect the cable from ETH0 without any impact on connectivity.

On the HSM side you don’t have many options, so you follow the standard procedure: assign the IP address to the bond (network interface bonding config -ip x.x.x.x -netmask y.y.y.y -gateway z.z.z.z) and bring it up (network interface bonding enable).
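Spelled out at the lunash prompt (same placeholders as above):

  [hsm-node-1] lunash:>network interface bonding config -ip x.x.x.x -netmask y.y.y.y -gateway z.z.z.z
  [hsm-node-1] lunash:>network interface bonding enable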

To check the status:

  [hsm-node-1] lunash:>network interface bonding show

  -----------------------------------------------------------
  Ethernet Channel Bonding Driver: v3.4.0-2 (October 7, 2008)

  Bonding Mode: fault-tolerance (active-backup)
  Primary Slave: eth0 (primary_reselect failure)
  Currently Active Slave: eth1
  MII Status: up
  MII Polling Interval (ms): 100
  Up Delay (ms): 2000
  Down Delay (ms): 0

  Slave Interface: eth0
  MII Status: up
  Speed: 1000 Mbps
  Duplex: full
  Link Failure Count: 0
  Permanent HW addr: 00:15:c4:n7:13:06

  Slave Interface: eth1
  MII Status: up
  Speed: 1000 Mbps
  Duplex: full
  Link Failure Count: 0
  Permanent HW addr: 00:15:c4:n7:6a:34
  -----------------------------------------------------------
  -----------------------------------------------------------
  Status for eth0:
          Link detected: yes

  Status for eth1:
          Link detected: yes
  -----------------------------------------------------------

  Command Result : 0 (Success)

  [hsm-node-1] lunash:>status interface

  bond0     Link encap:Ethernet  HWaddr 00:15:C4:N7:13:06
            inet addr:192.168.100.42  Bcast:192.168.100.255  Mask:255.255.255.0
            UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
            RX packets:13479 errors:0 dropped:0 overruns:0 frame:0
            TX packets:3183 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:1059045 (1.0 MiB)  TX bytes:446623 (436.1 KiB)

  eth0      Link encap:Ethernet  HWaddr 00:15:C4:N7:13:06
            UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
            RX packets:12670 errors:0 dropped:0 overruns:0 frame:0
            TX packets:2082 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:996811 (973.4 KiB)  TX bytes:300205 (293.1 KiB)
            Interrupt:58 Memory:fb4c0000-fb4e0000

  eth1      Link encap:Ethernet  HWaddr 00:15:C4:N7:6A:34
            UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
            RX packets:809 errors:0 dropped:0 overruns:0 frame:0
            TX packets:1101 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:62234 (60.7 KiB)  TX bytes:146418 (142.9 KiB)
            Interrupt:169 Memory:fb6e0000-fb700000

  Command Result : 0 (Success)

How to configure SNMP on a Brocade VDX 6740T-1G switch

Monday, January 25th, 2016

Below is a snippet of the config that worked for me to allow SNMP v1 polling of a Brocade VDX 6740T-1G switch. Nothing fancy; I just wanted to enable read-only SNMP v1 access to the switch to start capturing interface load. Note that the NOS version is 6.0.2.

  snmp-server contact "Your network crew"
  snmp-server location "DC A"
  snmp-server sys-descr "Brocade VDX 6740T-1G"
  snmp-server community XXXXX groupname monitor
  snmp-server view monitor 1.3.6 included
  snmp-server group monitor v1 read monitor

The first three lines are not interesting. The fourth and the last one enable SNMP v1 read-only access. Note that you have to specify a group name; you can name it whatever you like, but it has to be consistent across the lines.

Finally, without the ‘snmp-server view monitor 1.3.6 included’ line you will be able to poll the switch, but no data will be returned. Views could be useful if you have multiple teams and want to separate who can monitor what, but since I don’t need that, I allowed access to the whole MIB.
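To verify polling works end to end, a quick walk of the interface octet counters using the community string configured above (the switch address below is a placeholder):

  snmpwalk -v 1 -c XXXXX 192.0.2.20 .1.3.6.1.2.1.2.2.1.10   # IF-MIB ifInOctets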

How to add a license to a Brocade VDX6740T-1G switch

Sunday, January 24th, 2016

In order to license a particular feature on a Brocade VDX 6740T-1G switch you’ll need:

  • transaction key (a 22-character string received from your Brocade supplier, bound to a particular feature, for example BR-VDX6740T-1G-16X10G-COD (to add the 16x10G Capacity on Demand feature) or BR-VDX6740-2X40G-POD (to unlock the two remaining QSFP ports));
  • access to the Brocade portal (Software Licensing section);
  • license ID of the switch where the license is going to be attached to.

To get a license ID, log in to the switch and run:

  show license id rbridge-id 12

  ===================================================
    12                    XX:XX:XX:XX:XX:XX:XX:XX

Since all my VDXs are in VCS Logical Chassis mode, I have to specify the rbridge-id of the member.

Log in to the Brocade portal, go to Software Licensing, and enter the transaction key. On the next page you’ll be prompted for an email address and the license ID.

Once generated, you’ll receive an XML file with a long string between the licKey tags.

Copy it (omitting the licKey tags) and execute on the switch:

  license add rbridge-id 12 licStr "XX XXXXXXXX#"

Make sure to place the license inside quotes, since there is normally a space in the license key.
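If you’d rather not copy it by hand, something like xmllint can pull the key out of the XML; the filename here is hypothetical:

  xmllint --xpath 'string(//licKey)' license.xml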

To check whether the license was deployed run:

  show license rbridge-id 12

  rbridge-id: 12
  xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
         10G Port Upgrade license
         Feature name:PORT_10G_UPGRADE
         License is valid
         Capacity: 16