Preface
Capturing load balancer traffic flows is not something most commercial applications handle elegantly. Several can't even gather statistics from F5 appliances because they lack the ability to index MIBs. With the help of some excellent templates found on the Cacti forums, you will be able to graph your virtual servers, interfaces, and memory. This article will walk you through the steps required to install & configure Cacti to begin monitoring F5 LTM global traffic, virtual server traffic, interface traffic & memory.
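Before pointing Cacti at the device, it's worth checking that the BigIP answers SNMP queries at all. A minimal sketch, assuming SNMP v2c with a read community of "public" and a management address of 192.0.2.1 (both made up, adjust to your setup):

# walk the F5 enterprise subtree (.1.3.6.1.4.1.3375) to confirm SNMP access
snmpwalk -v2c -c public 192.0.2.1 .1.3.6.1.4.1.3375 | head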
Hardware platforms
Ever needed to filter on certain F5 hardware platforms within your installation scripts? If so, check out the hardware platforms document in the tech section of f5.com
You’ll see a listing of all the hardware platforms like this:
Platform: C36
Models: BIG-IP 1500, Enterprise Manager 500
Form Factor: 1U
Host Board: Tyan 5112
Processor: Single Celeron 2.5GHz
SSL Card: Yes
Now combine the Platform code with a small bash script like this one:
# /PLATFORM contains a line like "platform=C62" on every BigIP
HW=$(awk -F= '/^platform/{print $2}' /PLATFORM)

# map the platform code onto the model number
case "$HW" in
    C62*) HARDWARE=3400;;
    C36)  HARDWARE=1500;;
    D44)  HARDWARE=2400;;
    *)    HARDWARE=UNKNOWN;;
esac
And you can tweak your scripts to only support certain hardware.
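For instance, a guard at the top of an install script could bail out on unsupported platforms (a sketch building on the case statement above):

if [ "$HARDWARE" = "UNKNOWN" ]; then
    echo "Unsupported hardware platform: $HW" >&2
    exit 1
fi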
BigIP LTM : configuring & testing the SNMP destinations
Configure
System -> SNMP -> Traps -> Destination -> Create
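The BigIP's SNMP agent is based on net-snmp, so conceptually the destination you create here corresponds to a standard trap sink directive in the snmpd configuration. A sketch of what that boils down to (receiver address and community string are made up; use the GUI rather than editing files by hand):

# net-snmp v2c trap sink: <receiver> <community>
trap2sink 192.0.2.10 public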
Test
[root@bigip:Active] config # logger -p local1.warning "Pool member 127.0.0.1:31337 monitor status down."
Verify
Check your trap receiver to see whether (and how) it picks up the message(s) sent by the BigIP.
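If you don't have your real trap receiver at hand, net-snmp's snmptrapd makes a quick throwaway listener (run it as root, since it binds UDP port 162):

# stay in the foreground (-f) and log incoming traps to stdout (-Lo)
snmptrapd -f -Lo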
BigIP : multiple virtual services running on different ports connected to one pool
Scenario
– Two application servers
– Each application server hosts 3 different services (on different ports), which depend on each other
Objective
When one of the services on a node goes down (as detected by a monitor), all services on that node should be marked as down.
Possible Solutions
- Solution “divided” : a separate pool for each service
This is the way you'd normally do it, yet it's not that clean, as it makes the configuration a bit more bloated.
- Solution “combined” : one pool for all services
Use the “translate service disable” option when creating a virtual server. This will disable port translation for the specific virtual server.
Example
If the virtual port is 65001 and the port used for the pool members is 65101, then a request sent to the virtual IP on port 65001 will be redirected to the pool member's port 65101.
If the “translate service” option is set to “disable”, the request will instead be sent to the pool member's port 65001.
In this case you can set up one pool, with different health checks for all the services the nodes should provide, and create virtual servers that all point to that single pool (see the sketch after the man page excerpt below).
man virtual
translate service
The options are enable or disable. You can turn port translation
off for a virtual server if you want to use the virtual server to
load balance connections to any service.
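A rough sketch of the bigpipe commands for the “combined” setup, assuming three virtual servers with hypothetical names that already exist and all point at the same pool (the exact syntax may vary per 9.x release):

# turn off port translation, so each request reaches the pool
# member on the port the client originally connected to
b virtual vs_app_65001 translate service disable
b virtual vs_app_65002 translate service disable
b virtual vs_app_65003 translate service disable
# print the virtual server's configuration to confirm the setting
b virtual vs_app_65001 list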
BigIP : connection mirroring
Scenario
There are two load balancers set up as a redundant pair, providing a simple default virtual server (with a pool).
Let's say you set up a connection towards this virtual server, and afterwards a failover occurs.
What happens to the connection that was set up to the load balancer that failed?
- The connection is being migrated to the other load balancer.
- The connection remains as it was, directed to the “failed” load balancer.
- The connection is terminated/reset (by the BigIP).
It might be a surprise to some that the correct answer is the second one. The connection remains “as it was”. Of course it won't be functional; yet by default there will be no failover of this connection, nor will the connection be terminated/reset by the BigIP.
When (and how) the connection will be reset depends solely on the client!
So one might ask: “Why doesn't the BigIP send a FIN/RST to the client?”
Another question might answer this one: “How would the BigIP be able to send the FIN/RST packet when it has failed?” A failover occurs when the unit isn't accessible anymore (it got disconnected from the network, crashed, etc.). It wouldn't be able to send this packet.
There is however a mechanism that does fail connections over to the other BigIP, though there is a (performance) trade-off involved. This mechanism is called “connection mirroring”: the state of all connections made to the active BigIP is also kept on the standby BigIP.
HOWTO
You can enable “connection mirroring” through:
– the command line (“man virtual”) : “b virtual *name* mirror conn enable”
– the GUI : virtual server -> advanced
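In runnable form (the virtual server name is made up, and the exact bigpipe syntax may vary per 9.x release):

# enable connection mirroring on an existing virtual server
b virtual vs_app mirror conn enable
# print the virtual server's configuration to confirm the setting
b virtual vs_app list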
So if you REALLY need it, you can use the option, yet be aware of the performance degradation it'll cause.
F5 BigIP version 9 & Cacti
If you've got a hard time finding a Cacti profile for the F5 BigIP version 9, check out the Cacti Forum. Don't follow through on the regular template links, as you'll end up with templates that won't work.
BigIP : tmm process
What is TMM
When you've been working with a BigIP load balancer, you may have encountered the tmm process and wondered what it is... TMM stands for “Traffic Management Microkernel”. In essence it handles all of the BigIP's traffic functionality: load balancing, SSL processing, compression, iRules, ...
For more detailed tech info, check the following page: http://www.f5.com/solutions/technology/tmos-dev_wp.html
CPU Utilization
What about its CPU utilization? You see it use a lot of CPU time from time to time, and you may wonder how the scheduling behind it works.
The TMM process will utilize all the available time from one CPU. On a multi-CPU model, it will use the highest numbered CPU for this. All other processes will utilize the remaining processing time on the other CPUs.
When the system is idle (no traffic to process), TMM will release up to 99% of its CPU time to other processes. Under load it will only release 20% of the available time to other processes (e.g. the httpd which powers the GUI).
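You can watch this behaviour from the BigIP's shell with the stock procps top (a quick sketch):

# one batch iteration of top, filtered down to the header and the tmm process
top -b -n 1 | grep -E 'PID|tmm'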