The results presented here show the maximum throughput of Kea. These tests use the simplest configuration possible, with default settings.
ISC's perfdhcp generates simple DORA/SARR exchanges without any additional options or option requests. Unless stated otherwise in the test description, we configure perfdhcp to simulate up to 500 million different clients, to avoid repeating any client IDs in the course of a single test.
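As an illustration, a perfdhcp invocation of this shape could be used; the interface name, server address, rate, and test period shown here are placeholders, not the exact values used in these tests:

```
# DHCPv4 DORA exchanges: -r sets the request rate, -R the number of
# simulated clients (500 million, so no client ID repeats in one run),
# -l the local interface, -p the test period in seconds.
perfdhcp -4 -r 100000 -R 500000000 -l eth0 -p 600 192.0.2.1

# DHCPv6 SARR exchanges use -6 instead of -4:
perfdhcp -6 -r 100000 -R 500000000 -l eth0 -p 600
```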
These tests use the basic Kea configuration described in the baseline test report with the simplest traffic characteristics. Other than thread count, the configuration is exactly the same as that used in the "Baseline results" test.
1. The first of these tests (results in the bar charts below) measures how the different Kea backends respond to variation in the number of threads used. The results of this test show that different Kea configurations require different settings for `thread-pool-size` to achieve optimum performance, measured in leases per second.
2. The second test (results in the line charts below) uses the `thread-pool-size` value that produced the highest results for each scenario in the first test. We vary the `packet-queue-size` value and measure the leases per second. This test shows that the optimum queue size depends on the Kea configuration.
We use the `thread-pool-size` and `packet-queue-size` settings that provide the optimum results in these tests to establish the multi-threading configuration for the other tests included in this report.
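These two knobs live in Kea's `multi-threading` map. A minimal sketch for one of the tuned configurations, assuming that a "queue size 4 per thread" label translates to `packet-queue-size` = 4 × `thread-pool-size` (that interpretation, and the values shown, are ours, not taken verbatim from the test configs):

```json
{
  "Dhcp4": {
    "multi-threading": {
      // Kea's config parser accepts //-style comments.
      "enable-multi-threading": true,
      "thread-pool-size": 8,
      // 4 per thread x 8 threads = 32 (illustrative).
      "packet-queue-size": 32
    }
  }
}
```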
Each test has an additional description. The bar charts below compare the results of the single runs presented in the second part of this page.
How 3000 (30% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
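As a sketch, a global host reservation of the kind these tests exercise might look like the following in kea-dhcp4.conf; the hardware address and IP address are placeholders, and the parameter names follow current Kea versions:

```json
{
  "Dhcp4": {
    // Look up host reservations in the global scope only.
    "reservations-global": true,
    "reservations-in-subnet": false,
    "reservations": [
      {
        "hw-address": "1a:1b:1c:1d:1e:1f",
        "ip-address": "192.0.2.101"
      }
    ]
  }
}
```

In the memfile scenarios the reservations are defined in the configuration file as above; in the mysql and postgresql scenarios they are stored in the corresponding hosts database instead.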
How 3000 (30% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: None, default settings
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
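The two optimizations named above map onto two Kea parameters. A minimal sketch (values are illustrative):

```json
{
  "Dhcp4": {
    // Only query reservations by hardware address, skipping
    // lookups by the other identifier types.
    "host-reservation-identifiers": [ "hw-address" ],
    // All reserved addresses lie outside the dynamic pools, so the
    // allocation engine can skip the in-pool reservation check.
    "reservations-out-of-pool": true
  }
}
```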
How 3000 (30% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) global reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) global reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) global reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 3000 (30% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 3000 (30% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 10000 (100% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 10000 (100% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) subnet reservations kept in memfile decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) subnet reservations kept in mysql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 14 threads, queue size 160 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How 15000 (150% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 8 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How 15000 (150% of all clients) subnet reservations kept in postgresql decreases performance (leases in memfile)
Reservation optimization: host-reservation-identifiers set to hw-address, and reservation out-of-pool
MT settings: 6 threads, queue size 20 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Each test has an additional description.
Checking whether memory usage still grows when there are no new clients, only returning ones. 10 million unique clients; 4 threads, queue size 4 per thread. Memory usage should stop growing after ~379 seconds. Traffic is generated for 1138 seconds, but Kea is monitored for an additional 100 seconds.
Testing node 1: kea-dhcp6.conf
Checking whether memory usage still grows when there are no new clients, only returning ones. 10 million unique clients; 8 threads, queue size 4 per thread. Memory usage should stop growing after ~367 seconds. Traffic is generated for 1101 seconds, but Kea is monitored for an additional 100 seconds.
Testing node 1: kea-dhcp4.conf
Checking whether memory usage still grows when there are no new clients, only returning ones. 10 million unique clients; 14 threads, queue size 160 per thread. Memory usage should stop growing after ~1252 seconds. Traffic is generated for 3758 seconds, but Kea is monitored for an additional 100 seconds.
Testing node 1: kea-dhcp6.conf
Checking whether memory usage still grows when there are no new clients, only returning ones. 10 million unique clients; 14 threads, queue size 160 per thread. Memory usage should stop growing after ~1232 seconds. Traffic is generated for 3698 seconds, but Kea is monitored for an additional 100 seconds.
Testing node 1: kea-dhcp4.conf
Checking whether memory usage still grows when there are no new clients, only returning ones. 10 million unique clients; 6 threads, queue size 20 per thread. Memory usage should stop growing after ~1305 seconds. Traffic is generated for 3917 seconds, but Kea is monitored for an additional 100 seconds.
Testing node 1: kea-dhcp6.conf
Checking whether memory usage still grows when there are no new clients, only returning ones. 10 million unique clients; 8 threads, queue size 20 per thread. Memory usage should stop growing after ~1213 seconds. Traffic is generated for 3639 seconds, but Kea is monitored for an additional 100 seconds.
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /10
Testing node 1: kea-dhcp4.conf
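The allocator under test is selected with Kea's `allocator` parameter, which can be set globally or per subnet and accepts `iterative` (the default), `random`, and `flq` (Free Lease Queue). A minimal sketch for one of the large-subnet scenarios (the subnet and pool values are illustrative):

```json
{
  "Dhcp4": {
    // Supported values: "iterative" (default), "random", "flq".
    "allocator": "flq",
    "subnet4": [
      {
        "subnet": "10.0.0.0/10",
        "pools": [ { "pool": "10.0.0.0/10" } ]
      }
    ]
  }
}
```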
Check different allocators: iterative
Subnet is /10
Testing node 1: kea-dhcp6.conf
Check different allocators: random
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: random
Subnet is /10
Testing node 1: kea-dhcp6.conf
Check different allocators: flq
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /10
Testing node 1: kea-dhcp6.conf
Check different allocators: random
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: random
Subnet is /10
Testing node 1: kea-dhcp6.conf
Check different allocators: flq
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /10
Testing node 1: kea-dhcp6.conf
Check different allocators: random
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: random
Subnet is /10
Testing node 1: kea-dhcp6.conf
Check different allocators: flq
Subnet is /10
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /9
Testing node 1: kea-dhcp6.conf
Check different allocators: random
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: random
Subnet is /9
Testing node 1: kea-dhcp6.conf
Check different allocators: flq
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /9
Testing node 1: kea-dhcp6.conf
Check different allocators: random
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: random
Subnet is /9
Testing node 1: kea-dhcp6.conf
Check different allocators: flq
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: iterative
Subnet is /9
Testing node 1: kea-dhcp6.conf
Check different allocators: random
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: random
Subnet is /9
Testing node 1: kea-dhcp6.conf
Check different allocators: flq
Subnet is /9
Testing node 1: kea-dhcp4.conf
Check different allocators: flq
Subnet is /8
Number of subnets: 1
Testing node 1: kea-dhcp4.conf
Check different allocators: flq
Subnet is /8
Number of subnets: 4
Testing node 1: kea-dhcp4.conf
Check different allocators: flq
Subnet is /8
Number of subnets: 8
Testing node 1: kea-dhcp4.conf
Each test has an additional description.
How adding 100 subnets decreases performance.
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
How adding 100 subnets decreases performance.
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
How adding 200 subnets decreases performance.
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
How adding 200 subnets decreases performance.
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
How adding 100 subnets decreases performance.
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp6.conf
How adding 100 subnets decreases performance.
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp4.conf
How adding 200 subnets decreases performance.
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp6.conf
How adding 200 subnets decreases performance.
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp4.conf
How adding 100 subnets decreases performance.
6 threads, queue size 20 per thread
Testing node 1: kea-dhcp6.conf
How adding 100 subnets decreases performance.
8 threads, queue size 20 per thread
Testing node 1: kea-dhcp4.conf
How adding 200 subnets decreases performance.
6 threads, queue size 20 per thread
Testing node 1: kea-dhcp6.conf
How adding 200 subnets decreases performance.
8 threads, queue size 20 per thread
Testing node 1: kea-dhcp4.conf
Each test has an additional description.
Let's run a v4 HA hot-standby setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
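The hot-standby scenarios pair a primary and a standby server via the HA hook library, with `parked-packet-limit` bounding how many packets may wait for lease updates to be acknowledged by the peer. A minimal sketch of the relevant fragment (server names, URLs, and the library path are placeholders):

```json
{
  "Dhcp4": {
    "parked-packet-limit": 256,
    "hooks-libraries": [
      {
        "library": "/usr/lib/kea/hooks/libdhcp_ha.so",
        "parameters": {
          "high-availability": [ {
            "this-server-name": "server1",
            "mode": "hot-standby",
            "peers": [
              { "name": "server1", "url": "http://192.0.2.1:8000/", "role": "primary" },
              { "name": "server2", "url": "http://192.0.2.2:8000/", "role": "standby" }
            ]
          } ]
        }
      }
    ]
  }
}
```

The load-balancing and passive-backup scenarios below differ mainly in the `mode` value and peer roles.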
Let's run a v6 HA hot-standby setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA hot-standby setup with the mysql lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA hot-standby setup with the mysql lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA hot-standby setup with the postgresql lease backend.
Multi-threading settings:
8 threads, queue size 20 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA hot-standby setup with the postgresql lease backend.
Multi-threading settings:
6 threads, queue size 20 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA load-balancing setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA load-balancing setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
`parked-packet-limit` is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA load-balancing setup with the MySQL lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA load-balancing setup with the MySQL lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA load-balancing setup with the PostgreSQL lease backend.
Multi-threading settings:
8 threads, queue size 20 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA load-balancing setup with the PostgreSQL lease backend.
Multi-threading settings:
6 threads, queue size 20 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA passive-backup setup with the PostgreSQL lease backend.
Multi-threading settings:
8 threads, queue size 20 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA passive-backup setup with the PostgreSQL lease backend.
Multi-threading settings:
6 threads, queue size 20 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
"wait-backup-ack" is set to true
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
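For reference, "wait-backup-ack" is a parameter of the HA hook library, valid in passive-backup mode. A minimal sketch of where it sits in the configuration (the server names, library path, and URLs here are placeholders, not the actual test configuration):

```json
"hook-libraries": [{
    "library": "/usr/lib/kea/hooks/libdhcp_ha.so",
    "parameters": {
        "high-availability": [{
            "this-server-name": "server1",
            "mode": "passive-backup",
            "wait-backup-ack": true,
            "peers": [
                { "name": "server1", "url": "http://192.0.2.1:8000/", "role": "primary" },
                { "name": "server2", "url": "http://192.0.2.2:8000/", "role": "backup" }
            ]
        }]
    }
}]
```

With "wait-backup-ack" enabled, the active server does not respond to the client until the backup acknowledges the lease update, which is why these runs are reported separately.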
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
"wait-backup-ack" is set to true
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
"wait-backup-ack" is set to true
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
"wait-backup-ack" is set to true
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA passive-backup setup with the PostgreSQL lease backend.
Multi-threading settings:
6 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
"wait-backup-ack" is set to true
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA passive-backup setup with the PostgreSQL lease backend.
Multi-threading settings:
8 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
"wait-backup-ack" is set to true
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp4.conf
Let's run a v4 HA hot-standby setup with the memfile lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA hot-standby setup with the memfile lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA hot-standby setup with the MySQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA hot-standby setup with the MySQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA hot-standby setup with the PostgreSQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
8 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA hot-standby setup with the PostgreSQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
6 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA load-balancing setup with the memfile lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA load-balancing setup with the memfile lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA load-balancing setup with the MySQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA load-balancing setup with the MySQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA load-balancing setup with the PostgreSQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
8 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA load-balancing setup with the PostgreSQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
6 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v4 HA passive-backup setup with the PostgreSQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
8 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA passive-backup setup with the PostgreSQL lease backend.
A third server is configured as a backup but remains offline for the entire test (Kea does not wait for responses from the backup).
Multi-threading settings:
6 threads, queue size 20 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 800 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the full test
Number of clients used: 10
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the full test
Number of clients used: 10
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the full test
Number of clients used: 10
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the full test
Number of clients used: 10
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 10
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 10
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 10
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 10
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 10
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 10
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 10
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 10
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the full test
Number of clients used: 50000000
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the full test
Number of clients used: 50000000
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the full test
Number of clients used: 50000000
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the full test
Number of clients used: 50000000
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 50000000
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 50000000
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 50000000
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the first half of the test
Number of clients used: 50000000
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 50000000
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the MySQL lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 50000000
Multi-threading settings:
14 threads, queue size 160 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v6 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 50000000
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 HA passive-backup setup with the memfile lease backend.
Backup server is offline during: the second half of the test
Number of clients used: 50000000
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256
Perfdhcp stops after 400 seconds.
Let's run a v4 hot-standby setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
Let's run a v6 hot-standby setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
Let's run a v4 load-balancing setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
Let's run a v6 load-balancing setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
Let's run a v4 hot-standby setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
Let's run a v6 hot-standby setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
Let's run a v4 load-balancing setup with the memfile lease backend.
Multi-threading settings:
8 threads, queue size 4 per thread
Let's run a v6 load-balancing setup with the memfile lease backend.
Multi-threading settings:
4 threads, queue size 4 per thread
Let's run a v4 HA hot-standby setup with the memfile lease backend, with the Kea DDNS server enabled on both nodes. Updates are sent to a BIND 9 server running only on node 1.
Performance degradation can be significant because multiple servers run on a single system, but the goal of this test is to check stability.
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp-ddns.conf
Testing node 1: kea-dhcp4.conf
Testing node 1: named.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp-ddns.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA hot-standby setup with the memfile lease backend, with the Kea DDNS server enabled on both nodes. Updates are sent to a BIND 9 server running only on node 1.
Performance degradation can be significant because multiple servers run on a single system, but the goal of this test is to check stability.
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp-ddns.conf
Testing node 1: kea-dhcp6.conf
Testing node 1: named.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp-ddns.conf
Testing node 2: kea-dhcp6.conf
Let's run a v4 HA load-balancing setup with the memfile lease backend, with the Kea DDNS server enabled on both nodes. Updates are sent to a BIND 9 server running only on node 1.
Performance degradation can be significant because multiple servers run on a single system, but the goal of this test is to check stability.
Multi-threading settings:
8 threads, queue size 4 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp-ddns.conf
Testing node 1: kea-dhcp4.conf
Testing node 1: named.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp-ddns.conf
Testing node 2: kea-dhcp4.conf
Let's run a v6 HA load-balancing setup with the memfile lease backend, with the Kea DDNS server enabled on both nodes. Updates are sent to a BIND 9 server running only on node 1.
Performance degradation can be significant because multiple servers run on a single system, but the goal of this test is to check stability.
Multi-threading settings:
4 threads, queue size 4 per thread
"parked-packet-limit" is set to 256. Traffic is generated for 800 seconds and the systems are monitored for another 100 seconds.
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp-ddns.conf
Testing node 1: kea-dhcp6.conf
Testing node 1: named.conf
Testing node 2: kea-ctrl-agent.conf
Testing node 2: kea-dhcp-ddns.conf
Testing node 2: kea-dhcp6.conf
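The DDNS tests above route updates through kea-dhcp-ddns (D2). A minimal sketch of the client-side map a kea-dhcp4.conf would use to enable updates (the address and port are placeholders for wherever D2 listens, not the exact test values):

```json
"dhcp-ddns": {
    "enable-updates": true,
    "server-ip": "127.0.0.1",
    "server-port": 53001
}
```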
In Scenarios 1 & 2 we show the impact of using Forensic Logging (using the default logging template). Scenarios 3 & 4 show the benefit of using the Lease Caching feature, both for DHCPv4 and DHCPv6. Please view the mouseover text for additional test scenario information.
How enabling forensic (legal) logging with the default settings impacts performance
8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How enabling forensic (legal) logging with the default settings impacts performance
4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
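Forensic (legal) logging is enabled through the legal log hook library; a minimal sketch of the kind of configuration involved (the library path and log location are placeholders, not the test configuration):

```json
"hook-libraries": [{
    "library": "/usr/lib/kea/hooks/libdhcp_legal_log.so",
    "parameters": {
        "path": "/var/lib/kea/log",
        "base-name": "kea-forensic"
    }
}]
```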
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp6.conf
Check how different classification configurations impact performance.
Testing node 1: kea-dhcp4.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the memfile lease database backend. In this case no leases expire during the test (the valid lifetime is 10000 seconds, which is longer than the test duration).
Testing node 1: kea-dhcp6.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the memfile lease database backend. In this case no leases expire during the test (the valid lifetime is 10000 seconds, which is longer than the test duration).
Testing node 1: kea-dhcp4.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the MySQL lease database backend. In this case no leases expire during the test (the valid lifetime is 10000 seconds, which is longer than the test duration).
Testing node 1: kea-dhcp6.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the MySQL lease database backend. In this case no leases expire during the test (the valid lifetime is 10000 seconds, which is longer than the test duration).
Testing node 1: kea-dhcp4.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the PostgreSQL lease database backend. In this case no leases expire during the test (the valid lifetime is 10000 seconds, which is longer than the test duration).
Testing node 1: kea-dhcp6.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the PostgreSQL lease database backend. In this case no leases expire during the test (the valid lifetime is 10000 seconds, which is longer than the test duration).
Testing node 1: kea-dhcp4.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the memfile lease database backend. In this case there are always more expired leases than a single expiration query can handle (the valid lifetime is 10 seconds).
Testing node 1: kea-dhcp6.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the memfile lease database backend. In this case there are always more expired leases than a single expiration query can handle (the valid lifetime is 10 seconds).
Testing node 1: kea-dhcp4.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the MySQL lease database backend. In this case there are always more expired leases than a single expiration query can handle (the valid lifetime is 10 seconds).
Testing node 1: kea-dhcp6.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the MySQL lease database backend. In this case there are always more expired leases than a single expiration query can handle (the valid lifetime is 10 seconds).
Testing node 1: kea-dhcp4.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the PostgreSQL lease database backend. In this case there are always more expired leases than a single expiration query can handle (the valid lifetime is 10 seconds).
Testing node 1: kea-dhcp6.conf
Each backend has a different lease-reclamation process; this test checks the performance impact of lease reclamation using the default configuration and the PostgreSQL lease database backend. In this case there are always more expired leases than a single expiration query can handle (the valid lifetime is 10 seconds).
Testing node 1: kea-dhcp4.conf
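The "default configuration" for reclamation corresponds to Kea's expired-leases-processing map. A sketch of the relevant knobs with their documented default values, shown here together with the short 10-second valid lifetime used in the second group of tests:

```json
"valid-lifetime": 10,
"expired-leases-processing": {
    "reclaim-timer-wait-time": 10,
    "flush-reclaimed-timer-wait-time": 25,
    "max-reclaim-leases": 100,
    "max-reclaim-time": 250
}
```

With a 10-second valid lifetime and at most 100 leases reclaimed per pass, expired leases accumulate faster than a single pass can clear them, which is the condition these tests exercise.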
In this test clients renew an assigned address immediately after it is assigned. Such behaviour violates the protocol, but it happens in practice. The lease cache helps mitigate the effect misbehaving clients have on Kea. Please compare the traffic details with the test below.
Multi-threading settings:
8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
How enabling the lease cache ("cache-threshold": 0.5 at the global level) impacts performance; the traffic details should be compared directly with the charts above, where the feature is disabled.
Multi-threading settings:
8 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp4.conf
In this test clients renew an assigned address immediately after it is assigned. Such behaviour violates the protocol, but it happens in practice. The lease cache helps mitigate the effect misbehaving clients have on Kea. Please compare the traffic details with the test below.
Multi-threading settings:
4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
How enabling the lease cache ("cache-threshold": 0.5 at the global level) impacts performance; the traffic details should be compared directly with the charts above, where the feature is disabled.
Multi-threading settings:
4 threads, queue size 4 per thread
Testing node 1: kea-ctrl-agent.conf
Testing node 1: kea-dhcp6.conf
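The lease-cache feature used in these tests is controlled by a single parameter, settable globally or per subnet. A minimal sketch with the value from these tests:

```json
{
    "Dhcp4": {
        "cache-threshold": 0.5
    }
}
```

With this setting, a renewal arriving before half of the lease's valid lifetime has elapsed is answered from the existing lease rather than triggering a database update.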
Request all addresses from a single subnet via bulk leasequery at 50-second intervals.
Number of subnets: 1
Number of connections with the same request: 1
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
Request all addresses from a single subnet via bulk leasequery at 50-second intervals.
Number of subnets: 1
Number of connections with the same request: 1
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
Request all addresses from a single subnet via bulk leasequery at 50-second intervals.
Number of subnets: 10
Number of connections with the same request: 10
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
Request all addresses from a single subnet via bulk leasequery at 50-second intervals.
Number of subnets: 10
Number of connections with the same request: 10
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
The number of clients is limited to 150000 to see the memory-usage impact of BLQ alone.
Request all addresses from a single subnet via bulk leasequery at 50-second intervals.
Number of subnets: 1
Number of connections with the same request: 1
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
The number of clients is limited to 150000 to see the memory-usage impact of BLQ alone.
Request all addresses from a single subnet via bulk leasequery at 50-second intervals.
Number of subnets: 1
Number of connections with the same request: 1
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
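Bulk leasequery (BLQ) is served by Kea's lease query hook library. A minimal sketch of the kind of configuration involved (the library path and requester address are placeholders, not the test values):

```json
"hook-libraries": [{
    "library": "/usr/lib/kea/hooks/libdhcp_lease_query.so",
    "parameters": {
        "requesters": [ "192.0.2.5" ]
    }
}]
```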
This section provides insight into how Kea performs second by second during longer-running tests. This information is primarily useful for the development team and does not provide guidance on any user settings.
Thanks to this data visualization, we can:
Check how Kea's response rate changes over a longer period at maximum load.
Multi-threading settings:
8 threads, queue size 4 per thread
Check how Kea response rates change over a longer period of maximum load.
Multi threading settings:
4 threads, queue size 4 per thread
Check how Kea response rates change over a longer period of maximum load.
Multi threading settings:
14 threads, queue size 160 per thread
Check how Kea response rates change over a longer period of maximum load.
Multi threading settings:
14 threads, queue size 160 per thread
Check how Kea response rates change over a longer period of maximum load.
Multi threading settings:
8 threads, queue size 20 per thread
Check how Kea response rates change over a longer period of maximum load.
Multi threading settings:
6 threads, queue size 20 per thread
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the iterative allocator.
Multi threading settings:
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the iterative allocator.
Multi threading settings:
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the iterative allocator.
Multi threading settings:
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp4.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the iterative allocator.
Multi threading settings:
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp6.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the iterative allocator.
Multi threading settings:
8 threads, queue size 20 per thread
Testing node 1: kea-dhcp4.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the iterative allocator.
Multi threading settings:
6 threads, queue size 20 per thread
Testing node 1: kea-dhcp6.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the random allocator.
Multi threading settings:
8 threads, queue size 4 per thread
Testing node 1: kea-dhcp4.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the random allocator.
Multi threading settings:
4 threads, queue size 4 per thread
Testing node 1: kea-dhcp6.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the random allocator.
Multi threading settings:
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp4.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the random allocator.
Multi threading settings:
14 threads, queue size 160 per thread
Testing node 1: kea-dhcp6.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the random allocator.
Multi threading settings:
8 threads, queue size 20 per thread
Testing node 1: kea-dhcp4.conf
Check how Kea response rates change over a longer period of maximum load while using a different allocator.
This test uses the random allocator.
Multi threading settings:
6 threads, queue size 20 per thread
Testing node 1: kea-dhcp6.conf
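In recent Kea versions the address allocator used in these scenarios is selected with a single knob, available globally or per subnet. A minimal sketch:

```json
{
    "Dhcp4": {
        "allocator": "random"
    }
}
```

Replacing "random" with "iterative" (the default) corresponds to the first group of allocator tests above.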
This page contains a history of the baseline performance testing results to help in detecting changes in performance.
All charts have their own scale. Although comparison would be easier with the same scale across all charts, it would then be harder to see performance changes for the slower backends. For comparisons between backends, please see the baseline results.
Welcome to the Kea performance testing report. This document is generated automatically after each test run.
In the section "Testing setup" we describe the common factors that apply to all tests. Specific test details and explanations can be found with the results.
All tests in this report can be divided into two groups. In the first type (A), we measure the top performance of this Kea version; in the second type (B), we test Kea with different features, settings, or longer running periods to observe the impact of various configurations and features, as compared with the maximum throughput measured in (A).
Calculating the maximum performance is a multistage process, with a couple of assumptions:
Testing is performed in ISC's internal network and uses three systems: two running Kea and the database backends (specifications below) and one running perfdhcp. All three are connected to one VLAN over a 1-gigabit Ethernet network.
Tests were executed using:
Configurations vary between tests and test types, details are described with the test results.
If not stated explicitly in the test description Kea has default configuration values.
Unless stated otherwise in the test description, only the basic four-message exchange (SARR for DHCPv6, DORA for DHCPv4) is generated. No release, renew, or rebind packets are sent.
Each client performs an exchange just once. perfdhcp simulates up to 500 million unique client IDs, so Kea will not recognize any client as returning.
Messages do not include any additional options, except those necessary to get an address from the DHCP server.
We use the traffic generator perfdhcp for all tests. perfdhcp is developed by ISC and is distributed with the Kea sources/packages.
We encourage the user to refer to the Kea ARM for more details.
Performance testing results are volatile; multiple factors have to be taken into account, e.g. hardware, OS type, network, database location (local or remote), compilation CXX flags, etc.
The results shown in this report are what we were able to observe within our testing network - they are not necessarily representative of what will be observed in another network.
The Kea development team takes performance and stability very seriously - please report any irregularities you observe on your network to the kea-users mailing list.
ISC strongly recommends making yourself familiar with the Kea performance optimization article.