Checkpoint: SmartCenter Migration Tools – R65, R70, R71, R75, R76, R77

This page will be updated as new tools become available. Please note that you need valid UserCenter credentials to download the files.

R77 Migration Tools –  Gaia / SecurePlatform / Linux / Windows / Solaris

R76 Migration Tools – Windows / SecurePlatform / RHEL / Gaia / IPSO 6 / Solaris

R75 Migration Tools – Windows / SecurePlatform / Linux / IPSO 6 / Solaris

 

Checkpoint: Troubleshooting VRRP on Nokia Firewalls

Troubleshooting VRRP on Nokia Checkpoint Firewalls

The purpose of this article is to help troubleshoot VRRP-related issues on Nokia Checkpoint firewalls. One of the most common problems in Nokia VRRP implementations is that interfaces on the active and standby firewalls end up in a Master/Master state. The main reason for this is that the individual VRIDs on the master and backup firewalls cannot see each other's VRRP multicast advertisements.

 

The first step is to check the VRRP state of the interfaces. This is how you can check that:

 

PrimaryFW-A[admin]# iclid
PrimaryFW-A> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
0 in Backup state
6 in Master state
PrimaryFW-A>
PrimaryFW-A> exit

 

Bye.
PrimaryFW-A[admin]#

 

SecondaryFW-B[admin]# iclid
SecondaryFW-B> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
4 in Backup state
2 in Master state
SecondaryFW-B>
SecondaryFW-B> exit

 

Bye.
SecondaryFW-B[admin]#

 

In the example shown, the primary firewall is Master for all six virtual routers, while the secondary is also Master for two of them; those two interfaces are in a Master/Master state.

 

The next step is to run tcpdump and check whether the VRRP multicasts are reaching the interface in question.

 

As the first troubleshooting measure, run tcpdump on the problematic interface of both the master and the backup firewall. If you are not sure which interface is the problematic one, "echo sh vrrp int | iclid" will tell you: it is the interface that is in the Master state on the backup firewall.
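A quick way to list only the entries currently in the Master state is to filter that same output. This is a hedged sketch: it assumes iclid accepts piped commands as in the one-liner above and that the per-interface output contains the word "master", so adjust the filter to match your actual output:

SecondaryFW-B[admin]# echo sh vrrp int | iclid | grep -i master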

 

PrimaryFW-A[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:46:11.379961 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:12.399982 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:13.479985 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:14.560007 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]

 

When you run tcpdump on the primary firewall, you can see the VRRP multicast advertisements leaving the interface (the "O" flag marks outbound packets).

 

Next, run the same tcpdump on the secondary firewall.

 

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:19:38.507294 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:39.527316 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:40.607328 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:41.687351 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:42.707364 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]

 

Now you can see that the interfaces on both the primary and the secondary firewall are sending VRRP multicast advertisements, but neither capture shows any inbound ("I") packets from its peer. The advertisements are not reaching the other firewall's interface, which means there is a communication breakdown, most likely caused by a network issue such as a switch or VLAN misconfiguration.
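To double-check, you can narrow the capture to traffic from the peer's address. The sketch below assumes the primary's address on this subnet is 192.168.1.1, as in the example, and reuses the filter style shown above; if your tcpdump build rejects the combined filter, the plain proto vrrp capture with a visual check for inbound "I" lines from the peer works just as well. The -c 10 option simply stops the capture after ten packets; if nothing at all is captured, the peer's advertisements are being dropped somewhere in the path.

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 -c 10 host 192.168.1.1 and proto vrrp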

 

Once the network issue is resolved, the two firewalls will see each other's advertisements again and the interface with the lower priority will move to the Backup state.

 

Now let us look at another scenario that leaves the firewall interfaces in a Master/Master state.

 

Again, run tcpdump on both of the interfaces in question:

 

PrimaryFW-A[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:46:11.206994 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:11.379961 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:12.286990 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:12.399982 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:13.307014 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:13.479985 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:14.387098 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:14.560007 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
00:46:15.467064 I 10.10.10.1 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 95 [tos 0xc0]
00:46:15.580010 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]

 

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
00:19:38.507294 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:38.630075 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]
00:19:39.527316 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:39.710131 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]
00:19:40.607328 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:40.790142 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]
00:19:41.687351 O 192.168.1.2 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 95 [tos 0xc0]
00:19:41.810150 I 10.10.10.2 > 224.0.0.18: VRRPv2-adver 20: vrid 103 pri 100 [tos 0xc0]

 

In the example above, look at the VRIDs and source addresses of the incoming and outgoing packets. Each firewall sends VRID 102 advertisements from the 192.168.1.x subnet but receives VRID 103 advertisements from the 10.10.10.x subnet on the same port. The VRIDs do not match, which indicates that the cabling is incorrect: the cables for VRID 102 and VRID 103 are connected to the wrong ports and need to be swapped to fix this issue.
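Before moving any cables, it is worth confirming which subnet each physical port is actually configured for, so you know which connections to swap. A simple check, using standard ifconfig and the interface name from the example:

PrimaryFW-A[admin]# ifconfig eth-s4p2c0
SecondaryFW-B[admin]# ifconfig eth-s4p2c0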

 

Swap the cables and the issue will be resolved. The firewall with the higher priority will go into the Master state.

 

A properly functioning VRRP pair looks like this:

 

PrimaryFW-A[admin]# iclid
PrimaryFW-A> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
0 in Backup state
6 in Master state
PrimaryFW-A> exit

 

Bye.
PrimaryFW-A[admin]#

 

SecondaryFW-B[admin]# iclid
SecondaryFW-B> sh vrrp

 

VRRP State
Flags: On
6 interface enabled
6 virtual routers configured
0 in Init state
6 in Backup state
0 in Master state
SecondaryFW-B> exit

 

Bye.
SecondaryFW-B[admin]#

 

If you were to tcpdump the healthy interface, this is how it would look:

 

PrimaryFW-A[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
18:25:44.015711 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:45.095726 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:46.175751 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:47.195770 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:48.275819 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:25:49.355812 O 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
^C
97 packets received by filter
0 packets dropped by kernel
PrimaryFW-A[admin]#

 

SecondaryFW-B[admin]# tcpdump -i eth-s4p2c0 proto vrrp
tcpdump: listening on eth-s4p2c0
18:26:07.415446 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:08.495451 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:09.515480 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:10.595486 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:11.675485 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:12.695522 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
18:26:13.775590 I 192.168.1.1 > 224.0.0.18: VRRPv2-adver 20: vrid 102 pri 100 [tos 0xc0]
^C
14 packets received by filter
0 packets dropped by kernel
SecondaryFW-B[admin]#

Credit to Secmanager

 

Checkpoint: Install the IPSO 4.x Bootmanager on a Flash-based System Running IPSO 6.x

Taken from sk44643

1. Copy the IPSO 4.x bootmgr to the system using WinSCP, FTP, SCP, etc. The file name should be of the form nkipflash-4.x.bin.
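For example, with scp (any transfer method works; the firewall address and destination directory below are only illustrative):

scp nkipflash-4.x.bin admin@<firewall-ip>:/var/admin/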

2. Use upgrade_bootmgr to write the 4.x bootmgr to the correct location on flash.

fw[admin]# upgrade_bootmgr nkipflash-4.x.bin (do not specify the boot device)

You may get an error here as the device is required; if so, specify /dev/wd0:

fw[admin]# upgrade_bootmgr nkipflash-4.x.bin /dev/wd0

3. Reboot and stop at the BOOTMGR> prompt. Drop to the boot manager shell, rewrite the disk label (the disklabel commands below read the current label from /dev/wd0s4 into a file and write it straight back), then start the install:

BOOTMGR> sh
# disklabel -r /dev/wd0s4 > /tmp/label
# disklabel -R /dev/wd0s4 /tmp/label
# exit
BOOTMGR> install

4. Go ahead and do your IPSO install; when complete, reboot.