About This Club
F5 gets its own club just because there is so much to cover.
- What's new in this club
-
Overview of Persistence Types
Reference: https://my.f5.com/manage/s/article/K26898044
Depending on the session type, there are several persistence methods to choose from. These are the persistence methods supported on F5 BIG-IP systems:
Cookie persistence - Uses the HTTP cookie header to persist connections across a session. This technique avoids the issues associated with simple persistence because the session ID is unique.
Destination address affinity persistence - Also known as sticky persistence, destination address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the destination IP address of a packet.
Hash persistence - Allows you to create a persistence hash based on an existing hash persistence profile. Using hash persistence is the same as using universal persistence, except that with hash persistence the resulting persistence key is a hash of the data, rather than the data itself. A hash value may be created based on source IP, destination IP, and destination port. While not necessarily unique to every session, this technique results in a more even distribution of load across servers. You cannot associate hash persistence with a virtual server that is managing Fast L4 traffic.
Host persistence - Allows the BIG-IP system to use the HTTP Host header passed in an HTTP request to determine which pool member to choose. You can also activate host persistence from within an iRule.
Microsoft Remote Desktop Protocol persistence - MSRDP persistence tracks sessions between clients and servers running the Microsoft Remote Desktop Protocol (RDP) service.
SIP persistence - An application-specific type of persistence used for servers that receive Session Initiation Protocol (SIP) messages sent through UDP, SCTP, or TCP. You generally use this persistence technique with stateful applications that depend on the client being connected to the same application instance throughout the life of the session.
Source address affinity persistence - Also known as simple persistence, source address affinity persistence supports TCP and UDP protocols, and directs session requests to the same server based solely on the source IP address of a packet.
SSL persistence - Because SSL sessions must be established and are tied to a session between client and server, failing to persist SSL-secured sessions results in renegotiation of the session. The BIG-IP system uses the SSL session ID to ensure that a session is properly routed to the application instance to which the session first connected. Even when the client's IP address changes, the BIG-IP system still recognizes the connection as persistent based on the session ID.
Universal persistence - Uses any piece of data (network, application protocol, payload) to persist a session. This technique requires the BIG-IP system to be able to inspect and ultimately extract any piece of data from a request or response. With universal persistence, you can write an expression that defines the data that the BIG-IP system will persist on in a packet.
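As a concrete illustration of universal persistence, here is a minimal iRule sketch that persists on an application's own session cookie. The cookie name JSESSIONID and the 1800-second timeout are assumptions for the example, not something from the referenced article:
when HTTP_REQUEST {
    # Persist on the application's session identifier (assumed here to be JSESSIONID)
    if { [HTTP::cookie exists "JSESSIONID"] } {
        persist uie [HTTP::cookie "JSESSIONID"] 1800
    }
}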
Cookie Persistence Reference: https://my.f5.com/manage/s/article/K6917 When you configure a cookie persistence profile to use the HTTP Cookie Insert or HTTP Cookie Rewrite method, the BIG-IP system inserts a cookie into the HTTP response, which well-behaved clients include in subsequent HTTP requests for the host name until the cookie expires. The cookie, by default, is named BIGipServer<pool_name>. The cookie is set to expire based on the expiration setting configured in the persistence profile. The cookie value contains the encoded IP address and port of the destination server. Reference: https://my.f5.com/manage/s/article/K83419154
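For reference, the default BIGipServer cookie value encodes the pool member's IPv4 address and port as little-endian integers (per K6917). Here is a small Python sketch that decodes such a value; the example value is made up for illustration:
# decode_bigip_cookie.py - decode a default BIGipServer<pool_name> cookie value
def decode_bigip_cookie(value):
    ip_enc, port_enc, _ = value.split(".")
    ip_enc, port_enc = int(ip_enc), int(port_enc)
    # IPv4 address is encoded little-endian (least significant byte = first octet)
    octets = [(ip_enc >> shift) & 0xFF for shift in (0, 8, 16, 24)]
    # Port is a byte-swapped 16-bit value
    port = ((port_enc & 0xFF) << 8) | (port_enc >> 8)
    return ".".join(map(str, octets)), port

print(decode_bigip_cookie("1677787402.36895.0000"))  # expected: ('10.1.1.100', 8080)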
-
On your non-F5 box where you plan on storing the files, run this command with no passphrase (this is a RHEL 7 box for me):
ssh-keygen -t rsa
Now copy the public key to the F5:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigip.fqdn
Create a .txt file with all your F5 devices listed in it, using FQDNs that are resolvable (only one per line):
vi f5_devices.txt
Create a file on the Linux box that you will be backing up configs to:
vi bigip_backup.sh
Copy the below and paste it into that new file:

#!/bin/bash
## PRE_REQ ## ssh-copy-id -i ~/.ssh/id_rsa.pub root@bigip
## TEST-VERIFY: ssh root@bigip <-- no-password login means success
###### SYNTAX to RUN: ./bigip_backup.sh f5_devices.txt [daily|weekly]
cat $1 | while read REMOTE_BIGIP || [[ -n $REMOTE_BIGIP ]]; do
  start=$SECONDS
  echo "STARTING with $REMOTE_BIGIP"
  DATETIME="`date +%Y%m%d_%H%M`"
  REMOTE_PATH='/var/tmp'
  LOCAL_PATH="/home/confback/backups/f5"
  FILE_UCS="$(echo f5_daily_backup_$REMOTE_BIGIP | cut -d'.' -f1)-${DATETIME}.ucs"
  FILE_SCF="$(echo f5_daily_backup_$REMOTE_BIGIP | cut -d'.' -f1)-${DATETIME}.scf"
  FILE_CERT="$(echo f5_daily_backup_$REMOTE_BIGIP | cut -d'.' -f1)-${DATETIME}.cert.tar"
  if [ $# -eq 0 ]; then
    echo "$0: Missing BIGIP FQDN - Try Running again: ./bigip_backup.sh f5_devices.txt"
    exit 1
  elif [ $# -gt 2 ]; then
    echo "$0: Too many arguments: $@"
    exit 1
  else
    echo "=================================================================="
    echo "filename........: $1"
    echo "REMOTE_BIGIP....: $REMOTE_BIGIP"
    echo "DATETIME........: $DATETIME"
    echo "REMOTE_PATH.....: $REMOTE_PATH"
    echo "LOCAL_PATH......: $LOCAL_PATH"
    echo "FILE_UCS........: $FILE_UCS"
    echo "FILE_SCF........: $FILE_SCF"
    echo "FILE_CERT.......: $FILE_CERT"
    echo "=================================================================="
    echo "Variables are SET"
    echo ""
  fi

  #DAILY
  echo "Do we have a UCS backup from today? Checking..."
  echo ""
  ssh -n $REMOTE_BIGIP find $REMOTE_PATH/f5_daily_backup_*.ucs -mtime -1 -ls > /dev/null
  if [ $? -eq 0 ]; then
    echo "$0: UCS exists so let's just download it"
  else
    echo "`date +%Y%m%d_%H.%M.%S`: saving config"
    ssh -n $REMOTE_BIGIP tmsh save /sys config > /dev/null
    echo "`date +%Y%m%d_%H.%M.%S`: creating UCS backup"
    ssh -n $REMOTE_BIGIP tmsh save /sys ucs $REMOTE_PATH/$FILE_UCS > /dev/null
    echo "....done with UCS...."
  fi
  # echo "`date +%Y%m%d_%H.%M.%S`: copy UCS backup"
  # scp -v $REMOTE_BIGIP:$REMOTE_PATH/f5_daily_backup_*.ucs $LOCAL_PATH/ > /dev/null
  # echo "`date +%Y%m%d_%H.%M.%S`: remove UCS backup to save room"
  # ssh -n $REMOTE_BIGIP rm -f $REMOTE_PATH/f5_daily_backup_*.ucs
  # echo "....done with UCS...."
  echo ""

  #WEEKLY (roughly 12min per device to backup)
  #echo "`date +%Y%m%d_%H.%M.%S`: creating SCF file"
  #ssh -n $REMOTE_BIGIP tmsh save /sys config file $REMOTE_PATH/$FILE_SCF no-passphrase > /dev/null
  #echo "`date +%Y%m%d_%H.%M.%S`: copying SCF file"
  #scp $REMOTE_BIGIP:$REMOTE_PATH/f5_daily_backup_*.scf* $LOCAL_PATH/
  #echo "`date +%Y%m%d_%H.%M.%S`: remove SCF file(s) to save room"
  #ssh -n $REMOTE_BIGIP rm -f $REMOTE_PATH/f5_daily_backup_*.scf*
  #echo "....done with SCF...."
  #echo ""
  #echo "`date +%Y%m%d_%H.%M.%S`: compressing SSL CERTs"
  #ssh -n $REMOTE_BIGIP tar -cf "${REMOTE_PATH}/${FILE_CERT}" /config/ssl
  #echo "`date +%Y%m%d_%H.%M.%S`: copying CERT compressed file"
  #scp $REMOTE_BIGIP:$REMOTE_PATH/f5_daily_backup_*.cert.tar $LOCAL_PATH/
  #echo "`date +%Y%m%d_%H.%M.%S`: remove CERT file to save room"
  #ssh -n $REMOTE_BIGIP rm -f $REMOTE_PATH/f5_daily_backup_*.cert*
  #echo "....done with CERT...."
  #echo ""

  #GENERAL
  echo "get $REMOTE_BIGIP:$REMOTE_PATH" | sftp $REMOTE_BIGIP:$REMOTE_PATH/f5_daily_backup_*.* $LOCAL_PATH <<EOF
EOF
  echo "`date +%Y%m%d_%H.%M.%S`: time to cleanup created files and rpm-tmp files"
  ssh -n $REMOTE_BIGIP rm -f $REMOTE_PATH/{rpm-tmp.*,f5_daily_backup_*.*}
  echo ""
  echo "FINISHED with $REMOTE_BIGIP now exiting"
  duration=$(( SECONDS - start ))
  echo "Duration(seconds): $duration"
  echo "Duration(minutes): $(( $duration / 60 ))"
  echo "*************************************************************************"
  echo " "
done
echo "Cleaning up any backup files older than 30 days on RHEL storage"
/usr/bin/find $LOCAL_PATH -type f -mtime +31 -exec rm -f {} \;

Hope this helps someone out. I know it worked great for our application.
-
Having issues trying to pass the password in the copy to remote file part so trying the python way like shown here Create file on your remote server vi f5-backup.py Copy and paste the following #! /usr/bin/env python # -*- coding: utf-8 -*- import os import json import datetime import requests import getpass import optparse import sys import hashlib from urllib3.exceptions import InsecureRequestWarning # Root CA for SSL verification ROOTCA = '' CHECKSUM = '' HOSTNAME = '' STATUS = False # credential Ask for user Active Directory authentication information # with a verification of entered password def credential(): #User name capture user = input('Enter Active Directory Username: ') # start infinite loop while True: # Capture password without echoing pwd1 = getpass.getpass('%s, enter your password: ' % user) pwd2 = getpass.getpass('%s, re-Enter Password: ' % user) # Compare the two entered password to avoid typo error if pwd1 == pwd2: # break infinite loop by returning value return user, pwd1 # get_token() will call F5 Big-ip API with username and password to obtain an authentication # security token def get_token(session): # Build URL URL_AUTH = 'https://%s/mgmt/shared/authn/login' % HOSTNAME # Request user credential username, password = credential() # prepare payload for request payload = {} payload['username'] = username payload['password'] = password payload['loginProviderName'] = 'tmos' # set authentication to username and password to obtain the security authentication token session.auth = (username, password) # send request and handle connectivity error with try/except try: resp = session.post(URL_AUTH, json.dumps(payload)).json() except: print("Error sending request to F5 big-ip. Check your hostname or network connection") exit(1) # filter key in response. if 'code' key present, answer was not a 200 and error message with code is printed. for k in resp.keys(): if k == 'code': print('security authentication token creation failure. Error: %s, Message: %s' % (resp['code'],resp['message'])) exit(1) # Print a successful message log and return the generated token print('Security authentication token for user %s was successfully created' % resp['token']['userName']) return resp['token']['token'] # create_ucs will call F5 Big-ip API with security token authentication to create a timestamps ucs backup # file of the F5 Big-ip device configuration def create_ucs(session): URL_UCS = 'https://%s/mgmt/tm/sys/ucs' % HOSTNAME # generate a timestamp file name ucs_filename = HOSTNAME + '_' + datetime.datetime.now().strftime('%Y-%m-%d-%H%M%S') + '.ucs' # prepare the http request payload payload = {} payload['command'] = 'save' payload['name'] = ucs_filename # send request and handle connectivity error with try/except try: resp = session.post(URL_UCS, json.dumps(payload)).json() except: print("Error sending request to F5 big-ip. Check your hostname or network connection") exit(1) # filter key in response. if 'code' key present, answer was not a 200 and error message with code is printed. for k in resp.keys(): if k == 'code': print('UCS backup creation failure. 
Error: %s, Message: %s' % (resp['code'],resp['message'])) exit(1) # Print a successful message log print("UCS backup of file %s on host %s successfully completed" % (resp['name'], HOSTNAME)) return ucs_filename, checksum(session, ucs_filename) def checksum(session, filename): URL_BASH = 'https://%s/mgmt/tm/util/bash' % HOSTNAME # prepare the http request payload payload = {} payload['command'] = 'run' payload['utilCmdArgs'] = '''-c "sha256sum /var/local/ucs/%s"''' % filename # send request and handle connectivity error with try/except try: resp = session.post(URL_BASH, json.dumps(payload)).json()['commandResult'] except: print("Error sending request to F5 big-ip. Check your hostname or network connection") exit(1) checksum = resp.split() return checksum[0] # delete_ucs will call F5 Big-ip API with security token authentication to delete the ucs backup # file after local download def delete_ucs(session, ucs_filename): URL_BASH = 'https://%s/mgmt/tm/util/bash' % HOSTNAME # prepare the http request payload payload = {} payload['command'] = 'run' payload['utilCmdArgs'] = '''-c "rm -f /var/local/ucs/%s"''' % ucs_filename # send request and handle connectivity error with try/except try: session.post(URL_BASH, json.dumps(payload)).json() except: print("Error sending request to F5 big-ip. Check your hostname or network connection") exit(1) def ucsDownload(ucs_filename, token): global STATUS # Build request URL URL_DOWNLOAD = 'https://%s/mgmt/shared/file-transfer/ucs-downloads/' % HOSTNAME # Define chunck size for UCS backup file chunk_size = 512 * 1024 # Define specific request headers headers = { 'Content-Type': 'application/octet-stream', 'X-F5-Auth-Token': token } # set filename and uri for request filename = os.path.basename(ucs_filename) uri = '%s%s' % (URL_DOWNLOAD, filename) requests.packages with open(ucs_filename, 'wb') as f: start = 0 end = chunk_size - 1 size = 0 current_bytes = 0 while True: content_range = "%s-%s/%s" % (start, end, size) headers['Content-Range'] = content_range #print headers resp = requests.get(uri, headers=headers, verify=False, stream=True) if resp.status_code == 200: # If the size is zero, then this is the first time through the # loop and we don't want to write data because we haven't yet # figured out the total size of the file. if size > 0: current_bytes += chunk_size for chunk in resp.iter_content(chunk_size): f.write(chunk) # Once we've downloaded the entire file, we can break out of # the loop if end == size: break crange = resp.headers['Content-Range'] # Determine the total number of bytes to read if size == 0: size = int(crange.split('/')[-1]) - 1 # If the file is smaller than the chunk size, BIG-IP will # return an HTTP 400. So adjust the chunk_size down to the # total file size... 
if chunk_size > size: end = size # ...and pass on the rest of the code continue start += chunk_size if (current_bytes + chunk_size) > size: end = size else: end = start + chunk_size - 1 if sha256_checksum(ucs_filename) == CHECKSUM: STATUS = True def sha256_checksum(filename, block_size=65536): sha256 = hashlib.sha256() with open(filename, 'rb') as f: for block in iter(lambda: f.read(block_size), b''): sha256.update(block) return sha256.hexdigest() def f5Backup(hostname): global STATUS, CHECKSUM,HOSTNAME counter = 0 HOSTNAME = hostname # Disable SSL warning for Insecure request requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning) # create a new https session session = requests.Session() # update session header session.headers.update({'Content-Type': 'application/json'}) # Disable TLS cert verification if ROOTCA == '': session.verify = False else: session.verify = ROOTCA # set default request timeout session.timeout = '30' # get a new authentication security token from F5 print('Start remote backup F5 big-Ip device %s ' % HOSTNAME) token = get_token(session) # disable username, password authentication and replace by security token # authentication in the session header session.auth = None session.headers.update({'X-F5-Auth-Token': token}) # create a new F5 big-ip backup file on the F5 device print('Creation UCS backup file on F5 device %s' % HOSTNAME) ucs_filename, CHECKSUM = create_ucs(session) # locally download the created ucs backup file #download_ucs(session, ucs_filename) while not STATUS: print("Download file %s attempt %s" % (ucs_filename, counter+1)) ucsDownload(ucs_filename, token) counter+=1 if counter >2: print('UCS backup download failure. inconscistent' \ 'checksum between origin and destination') print('program will exit and ucs file will not be deleted from F5 device') exit(1) print('UCS backup checksum verification successful') # delete the ucs file from f5 after local download # to keep f5 disk space clean delete_ucs(session, ucs_filename) if __name__ == "__main__": # Define a new argument parser parser=optparse.OptionParser() # import options parser.add_option('--hostname', help='Pass the F5 Big-ip hostname') # Parse arguments (opts,args) = parser.parse_args() # Check if --hostname argument populated or not if not opts.hostname: print('--hostname argument is required.') exit(1) f5Backup(opts.hostname) Save the file then run it with the following syntax python3 f5-backup.py --hostname <fqdn_f5_appliance>
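To cover a whole fleet with the script above, one option is a small wrapper that reads the same f5_devices.txt list and runs the backup once per host. This is only a sketch; it assumes the script above is saved as f5-backup.py in the current directory, and note that each run will prompt interactively for credentials:
#!/usr/bin/env python3
# backup_all.py - illustrative wrapper: run f5-backup.py once per device in f5_devices.txt
import subprocess

with open('f5_devices.txt') as devices:
    for line in devices:
        host = line.strip()
        if not host:
            continue
        print('=== Backing up %s ===' % host)
        # Each invocation prompts for Active Directory credentials via getpass
        subprocess.run(['python3', 'f5-backup.py', '--hostname', host], check=False)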
-
Here is a script that may help you back up your F5 to a remote server on a regular basis when you don't want to use the F5 tool BIG-IQ.
Create the file:
vi /var/tmp/script_backup.sh
Make the file executable:
chmod 755 /var/tmp/script_backup.sh
Copy and paste the following into the new file:

#!/bin/bash
TFTP_SERVER=10.0.0.0
DATETIME="`date +%Y%m%d%H%M`"
OUT_DIR='/var/tmp'
FILE_UCS="f5_lan_${HOSTNAME}.ucs"
FILE_SCF="f5_lan_${HOSTNAME}.scf"
FILE_CERT="f5_lan_${HOSTNAME}.cert.tar"
cd ${OUT_DIR}
tmsh save /sys ucs "${OUT_DIR}/${FILE_UCS}"
tmsh save /sys config file "${OUT_DIR}/${FILE_SCF}" no-passphrase
tar -cf "${OUT_DIR}/${FILE_CERT}" /config/ssl
tftp $TFTP_SERVER <<-END 1>&2
mode binary
put ${FILE_UCS}
put ${FILE_SCF}
put ${FILE_CERT}
quit
END
rm -f "${FILE_UCS}"
rm -f "${FILE_SCF}"
rm -f "${FILE_CERT}"
rm -f "${FILE_SCF}.tar"
RTN_CODE=$?
exit $RTN_CODE

Once your script runs successfully, go ahead and add it to your crontab so it runs on a regular basis:
crontab -e
30 0 * * 6 /var/tmp/script_backup.sh
Now what I would like to do is have a script on my remote server that runs from a cron job and would: connect to the BIG-IP, copy the script up, run the script to create the files, copy the files down to the server storing them, clean up the files, and then go to the next BIG-IP in the list (see the sketch below).
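Here is a minimal sketch of that remote-driver idea, assuming the key-based SSH setup described in the earlier post. The paths, file names, and device list are assumptions for illustration, and it assumes you trim the tftp/rm section out of script_backup.sh so the files stay in /var/tmp for the scp step:
#!/bin/bash
# pull_f5_backups.sh - illustrative only: push script_backup.sh to each BIG-IP, run it, pull the results
LOCAL_PATH=/home/confback/backups/f5
while read BIGIP; do
  [ -z "$BIGIP" ] && continue
  echo "=== $BIGIP ==="
  scp /home/confback/script_backup.sh root@$BIGIP:/var/tmp/script_backup.sh          # copy script up
  ssh -n root@$BIGIP "chmod 755 /var/tmp/script_backup.sh && /var/tmp/script_backup.sh"  # run it
  scp "root@$BIGIP:/var/tmp/f5_lan_*" "$LOCAL_PATH/"                                  # copy files down
  ssh -n root@$BIGIP "rm -f /var/tmp/f5_lan_*"                                        # clean up on the BIG-IP
done < /home/confback/f5_devices.txt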
-
This is very awesome and saved me a ton of time, since in the GUI you can only export one policy at a time, which exports to XML just like this does. Now the real question is: do you have a script to import all the XML files?
-
Here is a bash script that will export all your ASM policies as XML files, the same as if you went one by one via the GUI.
Create the file on your machine (I've tested from my Macbook as well as a RedHat server successfully):
vi exportASMpolicies.sh
Then copy and paste the following:

#!/bin/bash
envir=SAT1
echo "environment:$envir"
ltm=10.46.48.13
echo "ltm:$ltm"
user=admin
echo "user:$user"
pass=SuP3rC00L
echo "password:$pass"
curl -ku $user:$pass -X GET https://$ltm/mgmt/tm/asm/policies | jq '.items[] | "pol_name:" + .name + ";api_id:" + .id' >> asmDetails$envir.txt
cat asmDetails$envir.txt |grep pol_name |cut -d":" -f2 |cut -d";" -f1 >> asmPolicies$envir.txt
cat asmDetails$envir.txt |grep pol_name |cut -d":" -f3 |cut -d'"' -f1 >> asmIDs$envir.txt
folderName="$(zdump AEST)"
mkdir -p asm"$envir"Backup
mkdir "asm"$envir"Backup/""$folderName"
paste -d'\n' asmPolicies"$envir".txt asmIDs"$envir".txt | while read asmPolicy && read asmIDs;do
echo $asmPolicy $asmIDs
curl -ku $user:$pass -X POST https://"$ltm"/mgmt/tm/asm/tasks/export-policy -H 'Content-Type: application/json' -d '{"filename":"'$asmPolicy'","policyReference":{"link":"https://localhost/mgmt/tm/asm/policies/'$asmIDs'"}}'
curl -ku $user:$pass -X GET https://"$ltm"/mgmt/tm/asm/file-transfer/downloads/$asmPolicy >> asm"$envir"Backup/"$folderName"/$asmPolicy.xml
done
rm asmDetails"$envir".txt
rm asmPolicies"$envir".txt
rm asmIDs"$envir".txt

Don't forget to make the file executable:
chmod +x exportASMpolicies.sh
Then finally run the command and you should get a folder of output:
./exportASMpolicies.sh
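To the import question above: I haven't tested this end to end, but a rough sketch in the same iControl REST style would be to upload each XML and then kick off an import-policy task. Treat the endpoints, payload keys, and folder path below as assumptions to verify against your TMOS version before using it:
#!/bin/bash
# importASMpolicies.sh - illustrative sketch, not a tested implementation
ltm=10.46.48.13
user=admin
pass=SuP3rC00L
for xml in asmSAT1Backup/*/*.xml; do
  pol=$(basename "$xml" .xml)
  size=$(stat -c%s "$xml")
  # upload the XML to the ASM file-transfer area
  curl -ku $user:$pass -X POST "https://$ltm/mgmt/tm/asm/file-transfer/uploads/$pol.xml" \
    -H 'Content-Type: application/octet-stream' \
    -H "Content-Range: 0-$((size-1))/$size" \
    --data-binary @"$xml"
  # start an import task referencing the uploaded file
  curl -ku $user:$pass -X POST "https://$ltm/mgmt/tm/asm/tasks/import-policy" \
    -H 'Content-Type: application/json' \
    -d '{"filename":"'$pol'.xml","name":"'$pol'"}'
done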
-
The best way to troubleshoot this is to run qkview in verbose mode so you can identify where qkview is getting hung up. If qkview is currently hung, press CTRL + C to break out of it. Now run qkview with -v, like this:
qkview -s0 -v
You will get output similar to this:
Executing Module: [qknsyncd.so]
Module [qknsyncd.so] execution time: 0.015643
Executing Module: [qkcloud.so]
Module [qkcloud.so] execution time: 0
Executing Module: [qkafm.so]
Module [qkafm.so] execution time: 0
Executing Module: [qkasm.so]
Module [qkasm.so] execution time: 0
Executing Module: [qkfips.so]
Module [qkfips.so] execution time: 0
Executing Module: [sccp_aom.so]
Module [sccp_aom.so] execution time: 0
Executing Module: [qkshared.so]
qkshared.so is where the qkview stopped for me and would go no further. (NOTE: qkview is not considered hung until it fails to move past a certain point after 5 minutes; in my case it never got past qkshared.so for much more than 5 minutes.) So I broke out of the qkview hang by typing CTRL + C and then ran the following:
tmsh restart /sys service statsd
After the command completes you can run:
tmsh show /sys memory
Then try to run your qkview command again and hopefully this time it works.
-
To utilize WideIPs on the GTM, you need communication from the GTM to the LTM(s) over port 4353, which iQuery uses to connect to the interconnect IP (Self IP). As an example, I want to add an F5 LTM with a Self IP of 10.47.195.229.
FIRST verify no firewall is blocking iQuery port 4353:
[cowboy@usfnt1slbgtm06:Active:Standalone] ~ # nc -v 10.47.195.229 4353
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 10.47.195.229:4353.
This confirms no firewall is blocking iQuery connections, so let's continue. If you already have the LTM added as a server, check the iQuery status on the GTM for that LTM:
tmsh show /gtm iquery all
--------------------------------------------------------
Gtm::IQuery: 10.47.195.229
--------------------------------------------------------
Server usfnt1slbdv27.hosangit.corp
Server Type unknown
Data Center San Antonio
Connection Time None
State not-connected
Connection ID 0
Reconnects 119
Backlogs 0
Bits In 0
Bits Out 0
Bytes Dropped 5.5K
Cert Expiration Date 02/26/29 12:41:53
Configuration Time None
Configuration Commit ID 0
Configuration Commit Originator ---
Local TMOS version 15.1.7
Remote TMOS version ---
Local big3d version 15.1.7.0.0.6
Remote big3d version ---
Cipher Name ---
Cipher Bits 0
Cipher Protocol ---
It's not connected, so let's dive deeper by reviewing logs. On the GTM, tail the gtm log (tailf /var/log/gtm) and you'll see:
May 4 07:10:31 txsat1slbgtm06 iqmgmt_ssl_connect: IP: 10.47.195.229 SSL error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
May 4 07:10:31 txsat1slbgtm06 iqmgmt_ssl_connect: IP: 10.47.195.230 SSL error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
The fix: the cert on the LTM was out of sync with the one on the GTM, so you just need to resync by running:
[root@usfnt1slbgtm06:Active:Standalone] config # tmsh
root@(usfnt1slbgtm06)(cfg-sync Standalone)(Active)(/Common)(tmos)# run gtm bigip_add 10.47.195.229
Retrieving remote and installing local BIG-IP's SSL certs ...
Enter root password for 10.47.195.229 if prompted
The authenticity of host '10.47.195.229 (10.47.195.229)' can't be established.
RSA key fingerprint is SHA256:3zjksJDFVYbwd4RWXPjpIlNKMC6zi4SMxDCJuCnF8GI.
RSA key fingerprint is MD5:06:2d:a6:e5:4f:b7:73:4c:db:70:72:60:4e:6a:8e:77.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.47.195.229' (RSA) to the list of known hosts.
Password:
==> Done <==
Now everything is connected.
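If you want to look at the certificate exchange on port 4353 directly before running bigip_add, an openssl client check from the GTM can show which certificate the LTM's big3d presents. This is just an extra diagnostic idea, not part of the original fix:
openssl s_client -connect 10.47.195.229:4353 -showcerts < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates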
-
Attached is the JSON you can import into your Postman You will still need to add some environment variables to include: rseries_appliance1_ip rseries_appliance1_name userName password x-auth-token_rseries_appliance1 taskid Tenant_Image Tasks include: Get Token Appliance Mode Show Appliance Mode Change Appliance Mode F5OS Image Show F5OS Images Upload F5OS to rSeries Upload Image Status F5OS Backup/Restore Backup DB Backup DB MOVE Backup DB MOVE Status RESET rSeries to Factory Restore Reboot System Restore Update admin/admin password Restore Upload backup DB File Restore UPLOAD status Restore F5OS DB F5OS VLANs Get VLAN List Config VLANs F5OS NetworkConfig Get PortGroup Config Config Portgroup Get Network Interfaces Get Network LAGs DETAILED Config LAG Get Network LAGs F5OS DNS Get DNS Config Config DNS F5OS NTP Get NTP Config Get Clock Config Config NTP F5OS Logging Get Remote Logging Config Config Logging Get SSH-CLI Timeouts Config SSH-CLI Timeouts Config Token LIfetime Get Licensing F5OS SNMP List SNMP-Allowed IPs Config SNMP-Allowed IPs F5OS Certs List Certs-Keys-CSRs-CAs Upload certificate, key and passphrase F5OS Tenant List Tenant Images Delete Tenant Images UPload Tenant Images Upload Tenant Image Status List Tenants Create Tenant Delete Tenant Resize Tenant Tenant DEPLOY Validate Tenant Status F5OS.postman_collection.json
-
Management Interface tmsh list /sys management-ip tmsh list /sys management-route tmsh show sys mac-address | grep -i mgmt 00:94:a1:ec:06:02 net interface mgmt mac-address ip addr show mgmt 5: mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:94:a1:ec:06:02 brd ff:ff:ff:ff:ff:ff inet 10.44.136.105/23 brd 10.44.137.255 scope global mgmt valid_lft forever preferred_lft forever inet6 fe80::294:a1ff:feec:602/64 scope link valid_lft forever preferred_lft forever ip a | grep -A 2 mgmt 3: eth0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc mq master mgmt state UP qlen 1000 link/ether 00:94:a1:ec:06:02 brd ff:ff:ff:ff:ff:ff inet6 fe80::294:a1ff:feec:602/64 scope link -- 5: mgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000 link/ether 00:94:a1:ec:06:02 brd ff:ff:ff:ff:ff:ff inet 10.44.136.105/23 brd 10.44.137.255 scope global mgmt valid_lft forever preferred_lft forever inet6 fe80::294:a1ff:feec:602/64 scope link tmsh list net interface mgmt net interface mgmt { if-index 32 mac-address 00:94:a1:ec:06:02 media-active 1000T-FD media-max 1000T-FD } Disable MGMT interface tmsh modify net interface mgmt disabled Clear MGMT interface status tmsh reset-stats net interface mgmt Enable MGMT interface tmsh modify net interface mgmt enabled ARPING arping -I mgmt 10.44.136.1 tmsh show net arp view all routes on system tmsh show /net route view all static routes tmsh list /net route tcpdump on mgmt interface tcpdump -s0 -nnni mgmt -vvv capture tcpdump into a pcap file tcpdump -s0 -nnni mgmt -vvv -w /var/tmp/mgmt-interface_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap Check out the media of the mgmt interface tmsh list /net interface mgmt media-active net interface mgmt { media-active 1000T-FD } change from auto speed and auto duplex to 100Mb Full Duplex tmsh modify net interface mgmt media 100TX-FD Reconfigure IP address for management IP and default gateway tmsh create /sys management-ip 10.44.136.105/23 tmsh modify /sys management-route default gateway 10.44.136.1
-
In reference to clearing cache SHOW ALL RECORDS in CACHE tmsh show ltm dns cache records rrset cache non-wideip-transparent-cache DELETE ALL RECORDS in CACHE tmsh delete ltm dns cache records rrset cache non-wideip-transparent-cache SHOW SPECIFIC RECORD in CACHE tmsh show ltm dns cache records rrset cache non-wideip-transparent-cache owner fqdn.example.com DELETE SPECIFIC RECORD in CACHE tmsh delete ltm dns cache records rrset cache non-wideip-transparent-cache owner fqdn.example.com NOTE: DNS server will cache a response only for as long as the TTL in the response. So setting a Maximum TTL in the Cache settings overrides any TTL response longer than 1 day. Standard TTL response is typically 3600 seconds so this shouldn’t be an issue. EXTRA USEFUL SHOW COMMANDS tmsh show ltm dns cache records key cache non-wideip-transparent-cache tmsh show ltm dns cache records nameserver zone-name corp cache non-wideip-transparent-cache EXTRA USEFUL DELETE COMMANDS tmsh delete ltm dns cache records <rrset|msg|nameserver|key> <property> <property value> <DNS Cache name> tmsh delete ltm dns cache records rrset owner example.com cache exampleCache
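For context on where a cache object like non-wideip-transparent-cache comes from: a transparent DNS cache is created under ltm dns cache and then referenced from a DNS profile attached to the listener or virtual server. The names below are placeholders and the exact option names can vary by TMOS version, so treat this as a sketch and confirm with "tmsh help ltm profile dns":
tmsh create ltm dns cache transparent my-transparent-cache
tmsh create ltm profile dns dns_with_cache { defaults-from dns enable-cache yes cache my-transparent-cache }
tmsh modify ltm virtual my_dns_vs profiles add { dns_with_cache }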
-
For this to work you need to decrypt the traffic as it comes in. It's too late if you already did a capture and all the traffic is encrypted. So this entry is for those of you who would like to do some work ahead of time on the F5 and then have the user do some application testing while you are running a tcpdump. In many cases for me, I have only needed to do this on our DMZ LTM, which is where our F5 works as an SSL bridge.
SETUP
Put the source IPs in a txt file. I'm calling mine /var/tmp/app1_dg_nonprod_address.txt
Create a datagroup:
tmsh create /sys file data-group dg.app1.nonprod separator ":=" source-path file:/var/tmp/app1_dg_nonprod_address.txt type ip
Create an iRule and reference the datagroup:
## irule.ssl.decrypt.app1.nonprod
when CLIENTSSL_HANDSHAKE {
if {[class match [getfield [IP::client_addr] "%" 1] equals dg.app1.nonprod] } {
log local0. "CLIENT_Side_IP:TCP source port: [IP::client_addr]:[TCP::remote_port]"
log local0. "CLIENT_RANDOM [SSL::clientrandom] [SSL::sessionsecret]"
log local0. "RSA Session-ID:[SSL::sessionid] Master-Key:[SSL::sessionsecret]"
}
}
when SERVERSSL_HANDSHAKE {
if {[class match [getfield [IP::client_addr] "%" 1] equals dg.app1.nonprod] } {
log local0. "CLIENT_Side_IP:TCP source port: [IP::client_addr]:[TCP::remote_port]"
log local0. "CLIENT_RANDOM [SSL::clientrandom] [SSL::sessionsecret]"
log local0. "RSA Session-ID:[SSL::sessionid] Master-Key:[SSL::sessionsecret]"
}
}
Add the iRule to the Virtual Server you want to capture traffic on.
Start the capture via CLI on the F5 where the iRule is:
tcpdump -ni 0.0:nnn -s0 --f5 ssl host 198.200.19.151 or host 10.46.69.31 -w /var/tmp/app1-ext.hosangit.com_tcpdump_VS_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap
Start the capture via CLI on the downstream F5 (optional):
tcpdump -ni 0.0:nnn -s0 --f5 ssl host 10.46.69.31 or host 10.46.126.197 or host 10.46.126.242 or host 10.46.126.253 -w /var/tmp/app1-int.hosangit.com_tcpdump_VS_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap
BEGIN testing the application to reproduce the error. Once the error occurs, STOP the captures by issuing a CTRL + C.
Download the .pcap file(s).
Get those secrets off the F5 that has the iRule running:
sed -e 's/^.*\(RSA Session-ID\)/\1/;tx;d;:x' /var/log/ltm > /var/tmp/app1-ext.hosangit.com-sessionsecrets_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pms
Download the session secrets (.pms file), for example: /var/tmp/app1-ext.hosangit.com-sessionsecrets_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pms
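Once you have the .pcap and the .pms files on your workstation, you can feed the secrets to Wireshark (TLS preference "(Pre)-Master-Secret log filename") or to tshark on the command line. A quick, hedged example; the file names are placeholders and older tshark builds use ssl.keylog_file instead of tls.keylog_file:
tshark -r app1-ext_capture.pcap -o tls.keylog_file:app1-ext-sessionsecrets.pms -Y http -V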
-
REFERENCE: https://support.f5.com/csp/article/K63534030 Please run validation scripts using following steps: Download Scripts Follows the steps under Step 1: Download and Prepare Scripts under the Recommended Actions section. Validate each of the following services by running the downloaded diagnostic scripts on a command shell: Access/APM service: sh RepairService.sh access -validate > ValidationOutput.txt IPSec service: sh RepairService.sh ipsec -validate > ValidationOutput.txt DoS Protection service: sh RepairService.sh dos -validate > ValidationOutput.txt Web Application Security (ASM) service: sh RepairService.sh asm -validate > ValidationOutput.txt Network Security (AFM) service: sh RepairService.sh afm -validate > ValidationOutput.txt Fraud Protection (WebSafe) service: sh RepairService.sh websafe -validate > ValidationOutput.txt Search for errors in the output: cat ValidationOutput.txt | grep "ERROR" If above command prints ERROR in the output then you are impacted by this issue and need to run the repair process under the Recommended Actions section. ElasticsearchTools-v1.5.tar.gz Recommended Actions Step 1: Download and Prepare Scripts Find the scripts and deploy as follows on the BIG-IQ CM device: Go to the F5 Downloads portal. Select Find a Download > BIG-IQ Centralized Management > 8.1.0 > Utilities. Download the ElasticsearchTools-vxxxxx.tar.gz file and transfer to the /shared/tmp directory on the BIG-IQ CM device. For information on transferring files to the BIG-IQ CM device, refer to K175: Transferring files to or from an F5 system. Extract the ElasticsearchTools.tar.gz on the BIG-IQ CM device: cd /shared/tmp tar -xzvf ElasticsearchTools-vxxxxx.tar.gz cd /shared/tmp/ElasticsearchTools-vxxxxx/ Step 2: Repair Services Repair the impacted service(s) and its Elasticsearch indices by running below repair process on any node in Elasticsearch cluster. Note: It needs to be run only once from one node, unless explicitly stated. Repairing Access (APM) service: Note: Repeat step A/B for each node in cluster, but C/D only needs to be done once from any node in cluster. A. Backup Access files in system folder on device: mkdir /var/config/rest/access/config/scripts/elasticsearch/backup/ cp /var/config/rest/access/config/scripts/elasticsearch/*.painless /var/config/rest/access/config/scripts/elasticsearch/backup/ B. Update Access files in system folder on device: /bin/cp -f *.painless /var/config/rest/access/config/scripts/elasticsearch/ C. Update Access .painless scripts: sh PushAPMScriptsToES.sh D. Update Access template files: sh RepairService.sh access Repairing IPSec service: sh RepairService.sh ipsec Repairing DoS Protection service: sh RepairService.sh dos Repairing Web Application Security (ASM) service: sh RepairService.sh asm Repairing Network Security (AFM) service: sh RepairService.sh afm Repairing Fraud Protection (WebSafe) service: sh RepairService.sh websafe Step 3: Validate Repair Process Please repeat validation found at top of this article. Save output to a new text tile (for example, ValidationOutputAfterRepair.txt). Examine the console output. If you see errors (tag ERROR) then it's possible that you may have encountered transitionary errors (for example, incorrect credentials or intermittent errors). If so, you can try repeating STEP TWO under Recommended Actions (save console output to RepairAttempt2.txt) followed again by validation at the beginning. 
If errors persist, use -forcereset at the end of the command line used in STEP TWO under Recommended Actions (save console output to ForceResetAttempt1.txt), again followed by the validation at the top of the page. Step 4: Restart Services. Deactivate and reactivate the impacted service/listener for the DCD in the BIG-IQ CM GUI. Once done, confirm that the corresponding statistics show up as expected in the BIG-IQ GUI.
-
I utilize an SSH terminal called ZOC and its pretty great and I love the User Command Bar that has all your favorites you can assign to different Session Profiles. So here are my session profiles in ZOC that I use F5_user_BIG-IP (white background/black lettering) F5_root_BIG-IP (transluecent-black background/white lettering) Only difference is how they look.. this helps remind me of what type account I'm logged in as. On both Session Profiles shown above I have 13 Folders of commands I utilize which are labeled Folder: Linux Common Folder: Changes Folder: F5 Common Folder: LTM Folder: GTM Folder: Virtual Folder: BIG-IQ Folder: tcpdump Folder: NET Folder: SSL Folder: user Folder: AUDIT Folder: Logging Below is a list of the commands I use for each folder Folder: Linux Common List files (by size) ls -lShr HDD space (pvs) pvs Folder: Changes GTM: WIPs avail tmsh show /gtm wideip | egrep 'Gtm::WideIp|Availability|Reason' | grep -c 'Availability : available' GTM: WIPs 2 txt tmsh show /gtm wideip | egrep 'Gtm::WideIp|Availability|Reason' > /var/tmp/$(echo $HOSTNAME | cut -d'.' -f1)-$(date +%Y%m%d_%H-%M)wideip.txt GTM: compare files diff -c /var/tmp/*wideipB4.txt /var/tmp/*wideipAFTER.txt GTM: POOLs avail tmsh show /gtm pool | egrep 'Gtm::Pool|Availability|Reason' GTM: POOLs 2 txt tmsh show /gtm pool | egrep 'Gtm::Pool|Availability|Reason' > /var/tmp/$(echo $HOSTNAME | cut -d'.' -f1)-$(date +%Y%m%d_%H-%M)pools.txt GTM: SERVERs avail tmsh show /gtm server all | egrep 'Gtm::Server|Availability|Reason' | grep -c 'Availability : available' GTM: SERVERs 2 txt tmsh show /gtm server all | egrep 'Gtm::Server|Availability|Reason' | grep -c 'Availability : available' > /var/tmp/$(echo $HOSTNAME | cut -d'.' -f1)-$(date +%Y%m%d_%H-%M)servers.txt GTM: iQuery tmsh show /gtm iquery | egrep 'Gtm::IQuery|Server|State' GTM: iQuery 2 txt tmsh show /gtm iquery | egrep 'Gtm::IQuery|Server|State' > /var/tmp/$(echo $HOSTNAME | cut -d'.' -f1)-$(date +%Y%m%d_%H-%M)iQuery.txt GTM: DataCenter tmsh show gtm datacenter all | egrep 'Gtm::|Datacenter|Availability|State|Reason|Connections' Folder: F5 Common Create Backup (UCS) tmsh save sys ucs /var/tmp/$(echo $HOSTNAME | cut -d'.' -f1)-$(date +%Y%m%d_%H-%M) Folder: LTM Folder: GTM Folder: Virtual Folder: BIG-IQ chk status of CM&DCDs curl -s -u admin:admin --insecure https://localhost:9200/_cat/nodes?v cluster health curl -s -u admin:admin --insecure https://localhost:9200/_cluster/health?pretty remove unassigned shards curl -s -k https://localhost:9200/_cat/shards | grep UNAS | awk '{print $1}' | sort | uniq | sed 's/+/%2B/g' | while read line ; do curl -s -k -X DELETE "https://localhost:9200/$line" ; done cluster settings curl -s -u admin:admin --insecure https://localhost:9200/_cluster/settings | jq . 
cluster nodes health curl -s -u admin:admin --insecure https://localhost:9200/_cat/nodes?v stop big3d service bigstart status big3d; bigstart stop big3d restjavad log tail -f /var/log/restjavad.0.log elasticsearch.log tail -f /var/log/elasticsearch/eslognode.log Folder: capture (tcpdump) step1_enable tcpdump db tmsh modify sys db tcpdump.sslprovider value enable tcpdump with clientIP tcpdump -ni 0.0:nnnp -s0 --f5 ssl host [client ip address] -w /var/tmp/api-qa_tcpdump_client_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap tcpdump with VS and POOL ips tcpdump -ni 0.0:nnn -s0 --f5 ssl host [virtual server ip] or host [pool member ip] or host [pool member ip] -w /var/tmp/api-qa_tcpdump_VS_$(date +%d_%b_%H_%M_%S)_$HOSTNAME.pcap tcpdump logs 2 splunk tcpdump -nni 0.0 host 10.43.147.213 or host 10.43.147.214 or host 10.47.147.213 or host 10.47.147.214 and port 514 tcpdump list interfaces tcpdump -D Folder: NET net performance (show) tmsh show sys performance; uptime net traffic (show) tmsh show sys traffic vlans (list) tmsh list net vlan | grep "net vlan " routes (show) tmsh show /net route static routes (list) tmsh list /net route get route to IP ip route get 10.47.147.214 traceroute using port traceroute -T -p 514 10.43.147.214 netcat port open nc -v 10.43.147.214 514 mgmt-ip (list) tmsh list /sys management-ip mgmt-route (list) tmsh list /sys management-route self-ip (list) tmsh list net self sys ip-address (show) tmsh show sys ip-address sys ip stats (show) tmsh show sys ip-stat reset sys ip stats tmsh reset-stats sys ip-stat icmp stats (show) tmsh show sys icmp-stat pva-traffic (show) tmsh show sys pva-traffic tmm-traffic (show) tmsh show sys tmm-traffic reset tmm-traffic stats tmsh reset-stats sys tmm-traffic arp (show) tmsh show net arp del arp entries tmsh delete net arp all interfaces (show) tmsh show net interface all-properties disabled interfaces tmsh modify net interface 5.0 6.0 disabled reset interfaces tmsh reset-stats net interface netstat netstat -nputw snatpool (list) tmsh list /ltm snatpool 1Folder: SSL Folder: user Folder: AUDIT Folder: Logging
-
Getting Started with rSeries
This course is designed to introduce you to the core features of the F5 rSeries platform. It begins with a high-level view of the system components and defines key concepts such as F5OS and tenants. Next, it presents the management domains available and explains how the naming and numbering convention is used to identify the rSeries platform.
Lesson Objectives
The F5 rSeries platforms are powerful systems that are designed specifically for application delivery performance, application security, and scalability. The rSeries delivers unprecedented levels of performance and leverages API-first, next-generation platform software. At the end of this course, you will be able to:
Identify the major components of the rSeries platform
Describe the role of F5OS and tenants
Interpret the rSeries naming and numbering convention
Describe the Dashboard
Identify the two Software Management Domains
Introducing the r10000
The r10000 system provides the flexibility and feature-rich capabilities of F5 products on a powerful and highly extensible hardware platform. The front of the r10000 series platform includes the following, as displayed in the graphic:
The LCD touchscreen includes a Health menu which enables you to run LCD tests.
The LED indicators of the various LEDs on the platform indicate the status of the system or component.
The F5 logo ball LED indicates whether the system is powered on and if the locator function is enabled. The F5 ball will blink if the locator function is enabled.
The r10000 series contains the following management ports, as displayed in the graphic:
Management port
USB 3.0 port
Serial console port
Serial failover port (future use)
The r10000 supports the following interface ports, as displayed in the graphic:
25GbE SFP28 ports (16)
100GbE QSFP28 ports (4)
The back of the r10000 Series platform includes a removable fan tray, two power supply units (PSUs), and a chassis ground terminal. The r10000 Series platforms are available with either an AC, a standard DC, or a high-voltage DC (HVDC) configuration.
Fan tray (removable)
Power input panel 1 (AC power receptacle)
Power input panel 2 (AC power receptacle)
Chassis ground terminal
F5 platforms support up to two AC, DC, or high-voltage DC (HVDC) hot-swappable power supply units (PSUs). Do not mix power supply unit models of different wattage. Use only PSUs of the same wattage and part number. The chassis has a removable fan tray that is designed to maintain airflow throughout the chassis. The fans in the fan tray run constantly while the unit is powered on. Over time, the fans may wear out, requiring you to replace the fan tray.
Introducing the r5000
The r5000 Series platform is a powerful system designed specifically for application delivery performance and scalability. The front of the r5000 series platform includes the following, as displayed in the graphic:
The LCD touchscreen includes a Health menu which enables you to run LCD tests.
The LED indicators of the various LEDs on the platform indicate the status of the system or component.
The F5 logo ball LED indicates whether the system is powered on and if the locator function is enabled. The F5 ball will blink if the locator function is enabled.
The r5000 series contains the following management ports, as displayed in the graphic:
Management port
USB 3.0 port
Serial console port
Serial failover port (future use)
The r5000 series platform supports the following interface ports, as displayed in the graphic:
100GbE QSFP28 ports (2)
25GbE SFP28 ports (8)
The r5000 Series platform contains either an AC power supply or a DC power supply. The AC power supply units (PSUs) are displayed here. Do not mix power supply unit models of different wattage. Use only PSUs of the same wattage and part number. The r5000 Series platform contains either an AC power supply or a DC power supply. The DC power supply units (PSUs) are displayed here.
rSeries Naming and Numbering Convention
Example: r10900
Generation: r = Vanquish/Pantera OR i = Shuttle
Series Price Point: Low = 4xxxx, 2xxxx; Mid = 5xxxx; High = 7xxxx, 10xxx
PAYG (Pay As You Grow): 6 = Low, 8 = High, 9 = High+
Mid-Gen (new CPU): 0
Reserved: 0
From left to right, the rSeries naming convention uses the first letter to identify the generation. The "r" represents the rSeries. The first number, "10" in this example, represents the series price point, whether it's low, medium, or high. The next number, "9" in our example, represents "Pay as You Grow," giving you the ability to upgrade from one tier to another through license keys. The next number, "0" in this case, represents mid-generation. This would be used, for example, if a new CPU is introduced. The last digit, "0", is reserved.
Interface Examples
Configuring the rSeries appliance can be accomplished using API calls, the command line interface (CLI), or the graphical user interface (GUI). The following are examples of creating a BIG-IP tenant using an API call, the CLI, and the GUI.
Create BIG-IP Tenant via API - Create a BIG-IP tenant via an API call.
Create BIG-IP Tenant via CLI - Create a BIG-IP tenant via the CLI.
Create BIG-IP Tenant via GUI - Create a BIG-IP tenant via the GUI.
Key Concepts: F5OS-A and Tenants
F5OS-A
F5OS-A is a host environment responsible for configuring, provisioning, and deploying BIG-IP tenants, as well as managing and monitoring the appliance hardware. The software provides all of the basic administration capabilities needed to manage the system software version, download BIG-IP tenant software, manage tenants, manage the license, manage users, and configure VLANs and trunks for the tenants. It is a Kubernetes-based platform layer for F5 systems that enables higher automation and multi-tenancy. F5OS is delivered in two versions: F5OS-C for chassis (VELOS) and F5OS-A for appliance-based hardware (rSeries).
Tenants
A tenant is a guest system running software on the appliance (for example, a classic BIG-IP system). You can run several tenants on the same appliance by assigning them resources from the appliance. The maximum number of tenants that can be created on an appliance depends on the model and the resources available on that model. The administrator can install BIG-IP Virtual Edition (VE) onto tenants. The rSeries uses KubeVirt to launch BIG-IP VE. The administrator downloads the tenant software image files from the F5 downloads site. (Note: this requires login credentials.) A tenant consists of one or more CPUs, memory, and storage. You manage the tenants using the API, CLI, or GUI. Tenants inherit certain capabilities, such as the license and VLANs, from the appliance.
rSeries Software Management Domains
There are two different Management Domains in the rSeries: the platform layer and the tenant layer.
Each has its own management IP address, its own set of users, and its own software. Each Management Domain can be accessed via API, CLI, or GUI. F5OS-A is the host environment responsible for configuring, provisioning, and deploying BIG-IP tenants, as well as managing and monitoring the appliance hardware. The tenants are managed by the Tenant Administrators.
Dashboard
The dashboard displays a graphical view of the platform interfaces (ports) and high-level information about network ports, vCPUs, active alarms, and tenants.
System Summary
The System Summary displays information about system storage, hostname, IP address, product name, software versions, available vCPUs, and deployed vCPUs.
Network
The Network section displays the current state of all system interfaces (ports) and port mappings.
CPU
The CPU section displays information about CPU thread counts.
Active Alarms
Active Alarms show where an event occurred (the source of the alert), its severity, a brief description of what occurred, and when it happened. The system updates the alarms every few seconds. The lower section includes an overview of tenants deployed on the system.
About Licensing
The license you receive from F5 determines what features and software modules the BIG-IP tenant will support. Before you can configure and use the rSeries system, you must activate a valid license. The license service coordinates the license installation on the system and configures the same license on the tenants. The license activation process is initiated with the base registration key.
Review
In this lesson, you learned how to:
Identify the major components of the rSeries platform
Describe the role of F5OS and tenants
Interpret the rSeries naming and numbering convention
Describe the Dashboard
Identify the two rSeries Software Management Domains
rSeries Datasheet: f5-application-delivery-controller-system-rseries-data-sheet.pdf
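Since the course references creating a BIG-IP tenant via an API call without showing one, here is an illustrative curl sketch against the F5OS-A RESTCONF interface. The port, path, payload keys, and image name below are assumptions based on the f5-tenants data model and should be verified against the F5OS API reference for your version before use:
curl -ku admin:admin -X POST "https://<rseries-mgmt-ip>:8888/restconf/data/f5-tenants:tenants" \
  -H "Content-Type: application/yang-data+json" \
  -d '{
    "tenant": [{
      "name": "bigip-tenant1",
      "config": {
        "image": "BIGIP-15.1.7-0.0.6.ALL-F5OS.qcow2.zip.bundle",
        "nodes": [1],
        "mgmt-ip": "10.10.10.10",
        "prefix-length": 24,
        "gateway": "10.10.10.1",
        "vlans": [100],
        "vcpu-cores-per-node": 4,
        "memory": 14848,
        "running-state": "deployed"
      }
    }]
  }'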
-
This script is created to make it easy to force a node or pool member offline and then re-enable it. Create a file called node-maint.sh and paste the following:

#!/bin/bash
# format to follow:
# $1 node-enable|node-forceoffline|poolnode-enable|poolnode-forceoffline
# $2 node-ipaddress|node-fqdn|member ip/fqdn and port
# example node: node-maint.sh node-forceoffline 10.6.0.141
# example node: node-maint.sh node-enable 10.6.0.141
# example member: node-maint.sh poolnode-forceoffline
# example member: node-maint.sh poolnode-enable /Production/10.40.152.116:8101 /Production/pool.alfa-fnt.hosangit.com.8101
echo "******* HELP *********"
echo "ACTION options are:"
echo "node-enable or node-forceoffline"
echo "poolnode-enable or poolnode-forceoffline"
echo ""
echo "NODE options are:"
echo "ip address or fqdn (you have to use however its configured)"
echo "POOL MEMBER options are:"
echo "/partition/ip|fqdn:port"
echo ""
echo "when enabling/disabling a pool member you must also include the pool (example: node-maint.sh poolnode-enable /Production/10.40.152.116:8101 /Production/pool.alfa-fnt.hosangit.com.8101)"
echo "****** END OF HELP ********"
action=$1
node=$2
pool=$3
if [[ $action == node-enable ]]
then
  echo "action selected = $action"
  echo "##START $node##";echo "##Current Status of $node##";tmsh show /ltm node $node | egrep ' Availability| State| Reason| Monitor Status| Session Status'; echo "##How many connections on $node?##";tmsh show /sys connection ss-server-addr $node; echo "##Enabling $node##";tmsh modify /ltm node $node state user-up session user-enabled; sleep 2;echo "##Show current status of node##";tmsh show /ltm node $node | egrep ' Availability| State| Reason| Monitor Status| Session Status';echo "##FINISHED $node##"
elif [[ $action == node-forceoffline ]]
then
  echo "action selected = $action"
  echo "##START $node##";echo "##Current Status of $node##";tmsh show /ltm node $node | egrep ' Availability| State| Reason| Monitor Status| Session Status'; echo "##How many connections on $node?##";tmsh show /sys connection ss-server-addr $node; echo "##Forcing $node Offline##";tmsh modify /ltm node $node state user-down session user-disabled; sleep 2;echo "##Show current status of node##";tmsh show /ltm node $node | egrep ' Availability| State| Reason| Monitor Status| Session Status';echo "##-->Checking number of connections just run:tmsh show /sys connection ss-server-addr $node";tmsh show /sys connection ss-server-addr $node;echo "##FINISHED $node##"
elif [[ $action == poolnode-enable ]]
then
  echo "action selected = $action"
elif [[ $action == poolnode-forceoffline ]]
then
  echo "action selected = $action"
else
  echo "no action set"
fi

The poolnode-enable and poolnode-forceoffline branches are still stubs; see the sketch below for one way to fill them in.
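Here is a possible way to complete the two pool member stubs, shown as a hedged sketch. It follows the same $2/$3 argument convention the script already documents; verify the tmsh member syntax on your version before relying on it:
elif [[ $action == poolnode-enable ]]
then
  echo "action selected = $action"
  echo "##Enabling pool member $node in pool $pool##"
  tmsh modify /ltm pool $pool members modify { $node { state user-up session user-enabled } }
elif [[ $action == poolnode-forceoffline ]]
then
  echo "action selected = $action"
  echo "##Forcing pool member $node in pool $pool offline##"
  tmsh modify /ltm pool $pool members modify { $node { state user-down session user-disabled } }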
-
These are a couple of ways to see what users on your F5 are doing.
TMSH commands:
cat /home/x_*/.tmsh-history-x_*
Linux commands:
cat /home/x_*/.bash_history
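If audit logging is enabled on the box, /var/log/audit also records configuration actions per user, which complements the per-user history files above. For example (the username is a placeholder):
grep "user=x_cowboy" /var/log/audit | less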
-
Differences between Force Offline and Disabled
Force Offline
Forced Offline: Specifies that the node can handle only active connections. That means F5 continues to manage only connections that are already established. When set to Forced Offline, a node or pool member allows existing connections to time out, but no new connections are allowed.
tmsh modify /ltm node <node name> state user-down session user-disabled
Disabled
Disabled: Specifies that the node/pool member can handle only persistent or active connections. That means F5 continues to manage connections that are already established plus everything in the persistence table (or connections with the right persistence cookie). When set to Disabled, a node or pool member continues to process persistent and active connections. It can accept new connections only if the connections belong to an existing persistence session.
tmsh modify /ltm node <node name> session user-disabled
In both cases F5 will remove the connections, but Force Offline is faster. If the node is set to Disabled or Forced Offline, any pool member in the BIG-IP configuration that uses that same IP address is also set to Disabled or Forced Offline. When you remove a pool member from a pool, the system immediately discontinues pool member monitoring and removes existing persistence entries. This action does not disrupt existing established connections; they remain open until the client disconnects or the connection times out.
Delete existing connections to the Disabled or Forced Offline node (optional)
If, after disabling or forcing the node offline, you want to delete all connections to that node, perform the following procedure. Impact of procedure: this command deletes all connections to the node's specific IP and every possible service port. Note: connections to nodes are on the server side (ss-server) of the BIG-IP connection. Connections to virtual servers are on the client side (cs-server) of the BIG-IP connection. To delete all connections to the node, use the following command syntax:
tmsh delete /sys connection ss-server-addr <node IP address>
-
I've been tasked with sending specific text to our SPLUNK from our F5 devices every hour. So let's walk through how to do that. This "task" is broken up into a few sections/to-dos:
Create a script that will send a syslog entry with specific text
VALIDATE you can see the specific text in SPLUNK
Add the script to crontab on the F5 to run every hour
Have SPLUNK check for the specific text and, if it does not receive 3 entries of the specific text in 3hrs, send an alert.
STEP1
TEST box, do you see syslog entries for the past 24hrs?
index="infra_network" sourcetype="f5:bigip:syslog" usdet2slbtst0*
YES
STEP2
IDENTIFY the syslog command needed to send text to SPLUNK using netcat
COMMAND: echo '<0><descriptive message>' | nc -w 1 -u <IP_address_of_syslog_server> <port_of_syslog_server>
EXAMPLE: echo '<0>netcat test message for Cowboy' | nc -w 1 -u 10.47.147.214 514
If it doesn't work, the most common error you get is:
Ncat: Could not resolve hostname "10.47.147.214 514": Name or service not known. QUITTING
If it does work it should look similar to the below image.
You can also test whether the TCP and/or UDP ports are open using bash redirection instead of netcat. Let me show you:
TCP Test: echo "<14>Cowboy Test TCP syslog message" >> /dev/tcp/10.47.147.214/514
UDP Test: echo "<14>Cowboy Test UDP syslog message" >> /dev/udp/10.47.147.214/514
After running both commands above, I then search Splunk for a unique word in my message, like Cowboy. I see only UDP made it, so TCP isn't supported.
IDENTIFY the syslog command needed to send text to SPLUNK using logger
COMMAND: logger -p <facility>.<level> "<descriptive message>"
EXAMPLE: logger -p local0.notice "logger test message for Cowboy"
STEP3
Add the command to crontab so it runs every hour on the hour:
crontab -e
0 * * * * echo '<0>netcat hourly big-ip test message' | nc -w 1 -u 10.47.147.214 514
STEP4
Confirm you see the message coming into SPLUNK. Example of my query for my environment:
index="infra_network" sourcetype="f5:bigip:syslog" host=txsat1slbdv0* "big-ip test message"
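If you want the hourly heartbeat to identify which BIG-IP sent it (useful for a per-host Splunk alert), a small wrapper script is one option. This is just a sketch; the destination IP and message text come from the example above and the script path is an assumption:
#!/bin/bash
# /shared/scripts/splunk_heartbeat.sh - send an hourly heartbeat syslog message that includes the hostname
SYSLOG_SERVER=10.47.147.214
SYSLOG_PORT=514
echo "<0>netcat hourly big-ip test message from $HOSTNAME $(date +%Y%m%d_%H%M)" | nc -w 1 -u $SYSLOG_SERVER $SYSLOG_PORT

# crontab -e entry:
# 0 * * * * /shared/scripts/splunk_heartbeat.sh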
-
A self IP address is an IP address on the F5 system that you associate with a VLAN, to access hosts in that VLAN. The management IP, on the other hand, is used to manage the F5 device configuration, SNMP monitoring, etc. By default, the BIG-IP system allows access to the following protocols and ports on the management interface:
Service | Port | Protocol | Description
SSH | 22 | TCP | Secure Shell protocol
HTTPS | 443 | TCP | Hypertext Transfer Protocol Secure protocol
SNMP | 161 | TCP | Simple Network Management Protocol
SNMP | 161 | UDP | Simple Network Management Protocol
F5 HA | 1026 | UDP | Network failover communication for high availability
F5 iQuery | 4353 | TCP | iQuery protocol
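Related to this: self IPs have their own port lockdown list, separate from the management interface. If you need a self IP to answer on a specific port (for example iQuery from a GTM), a quick tmsh example looks like this; the self IP name is a placeholder for your own object:
tmsh modify net self selfip-internal allow-service add { tcp:4353 }
tmsh list net self selfip-internal allow-service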
-
From CLI (command line interface) run the following command to list the current management IP address tmsh list /sys management-ip Also check the management route is in place tmsh list /sys management-route If nothing is configured or you want to change it then follow the below to assign the management IP to your F5 device tmsh create /sys management-ip [ip address/netmask] Example: tmsh create /sys management-ip 10.46.48.13/23 Don't forget to create the management route as well tmsh create /sys management-route default gateway <gateway ip address> Example: tmsh create /sys management-route default gateway 10.46.48.1 Now when you run the list command it should look similar to below tmsh list /sys management-ip sys management-ip 10.46.48.13/23 { description configured-statically } tmsh list /sys management-route sys management-route default { gateway 10.46.48.1 network default } The configuration option Network specifies the trap network. Management: Specifies that the system sends the trap out of the management IP address (or the cluster management IP address, if this is a clustered configuration). Other: Specifies that the system sends the trap out of the interface based on the routing tables. NOTE: By default, the SNMP trap egresses from the TMM interface if the trap destination is accessible through Management and TMM interfaces. Also, if there is a TMM route and Management route to the same trap destination, the traps will always egress from the TMM interface and not through the Management interface. To look at the routing table for both Management and TMM, use the following command: netstat -nr To view all existing TMM routes, type the following command: tmsh show /net route To view routes in the routing table, main, type the following command: ip route show table main To view the management routing table routes (table 245), type the following command: ip route show table 245
-
You can also remove old/unused partitions # tmsh show sys software status ------------------------------------------------------------------------- Sys::Software Status Volume Slot Product Version Build Active Status Allowed Version ------------------------------------------------------------------------- HD1.1 1 BIG-IP 15.1.7 0.0.6 yes complete yes HD1.2 1 BIG-IP 15.1.5 0.0.10 no complete yes Since HD1.2 isn't being used # tmsh delete /sys software volume HD1.2 To check inode usage, run the df -i command. # df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/mapper/vg--db--vda-set.1.root 112640 8081 104559 8% / devtmpfs 2934640 459 2934181 1% /dev tmpfs 2936621 246 2936375 1% /dev/shm tmpfs 2936621 905 2935716 1% /run tmpfs 2936621 2 2936619 1% /sys/fs/cgroup /dev/mapper/vg--db--vda-set.1._usr 350880 89564 261316 26% /usr /dev/mapper/vg--db--vda-set.1._var 196608 105372 91236 54% /var none 2936621 80 2936541 1% /var/tmstat prompt 2936621 8 2936613 1% /var/prompt /dev/mapper/vg--db--vda-dat.share.1 2621440 1722 2619718 1% /shared none 2936621 47 2936574 1% /shared/rrd.1.2 /dev/mapper/vg--db--vda-set.1._config 208000 5254 202746 3% /config /dev/mapper/vg--db--vda-dat.log.1 917504 464 917040 1% /var/log none 2936621 12 2936609 1% /run/pamcache none 2936621 1 2936620 1% /var/loipc /dev/loop0 0 0 0 - /var/apm/mount/apmclients-7221.2022.412.1126-5816.0.iso /dev/mapper/vg--db--vda-app.ASWADB.set.1.mysqldb 786432 3062 783370 1% /var/lib/mysql /dev/mapper/vg--db--vda-app.avr.dat.avrdata 249984 10 249974 1% /shared/avr If the inode usage is near or at 100 percent, move any unnecessary maintenance-related files from the BIG-IP system to a network share storage and schedule a time to reboot the BIG-IP system. To locate directories that has many files that would contribute to inode usage, you can use the following find command to locate the 20 largest directories that have most files in the /var partition: find /var -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -nr | head -20 You should delete old tcpdump, qkview, and core files from the system or move them to a network share. Check for deleted files still using disk space. # lsof -ws | grep -i 'size\|deleted' You can ignore anything in memory since it isn't taking up any disk space which would include /var/tmstat /proc tmpfs hugetlbfs
-
If the /var disk space is full, an alert is generated on the CLI:
Disk partition /var has only 0% free
To identify the disk space usage, run this command from the bash prompt:
df -h
The first attempt at reducing the amount of /var space being used is below. While performing the following procedure, the BIG-IP REST API is temporarily inaccessible, and higher disk IO may be seen. Run the following commands, in sequence:
# bigstart stop restjavad
# rm -rf /var/config/rest/storage*.zip
# rm -rf /var/config/rest/*.tmp
# bigstart start restjavad
You can also try running an F5 bash command to clean up tmp directories:
bash /usr/local/bin/clean_tmsh_tmp_dirs
If the above doesn't work then you have to try something else. To check which files occupy the most space in the /var directory, execute the command:
find /var/ -xdev -type f -exec du {} \; | sort -rn | head -20
Examine the above output, determine which files occupy the most space, and delete the unused files using admin credentials:
rm <filename>
If APM is provisioned (and sometimes even when it's not provisioned), EPSEC packages could be filling up space on the drive and in the UCS file. Log in to tmsh by entering the following command:
tmsh
Identify if you have any EPSEC packages installed:
list /apm epsec epsec-package recursive
Delete the EPSEC packages using the following command syntax:
delete /apm epsec epsec-package all
NOTE: the one that is in use doesn't go away, just the unused packages.
Note: It may take a few minutes for this command to reflect that the package deletion completed. You can validate the packages were deleted by entering the following command:
list /apm epsec epsec-package recursive
-
If the /shared disk space is full, an alert is generated on the CLI:
Disk partition /shared has only 0% free
Delete old image files (.iso) or capture files (.pcap) saved in /shared/tmp/images; this frees up some space in the /shared directory.
To identify the disk space usage, run this command from the bash prompt:
df -h
To check which files occupy the most space in the /shared directory, execute the command:
find /shared/ -xdev -type f -exec du {} \; | sort -rn | head -20
Examine the above output, determine which files occupy the most space, and delete the unused files using admin credentials:
rm <filename>