System Test Report

Test Session: 2021-04-14--001

Ducktape Version: 0.8.1

Summary

Tests | Passes | Failures | Ignored | Time
849   | 652    | 0        | 197     | 316 minutes 40.380 seconds

Color Key

pass | fail | ignore

Failed Tests

Test | Description | Time | Data | Detail
(none)

Ignored Tests

Test | Description | Time | Data | Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce_at_start
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_shutdown",
  "sleep_time_secs": 0
}
        Start a smoke test client, then kill one particular broker immediately before Streams starts.
        Streams should throw an exception since it cannot create topics with the desired
        replication factor of 3.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.0.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.1.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.2.2",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.3.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.4.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.5.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.6.1",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "2.7.0",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "0.10.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "0.10.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "0.11.0.3"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "1.0.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "1.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.0.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.1.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.2.2"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.3.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.4.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.5.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.6.1"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "2.7.0"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_upgrade_downgrade_brokers
Arguments:
{
  "from_version": "dev",
  "to_version": "dev"
}
        Start a smoke test client then perform rolling upgrades on the broker.
        
0.000 seconds

Passed Tests

Test | Description | Time | Data | Detail
Module: kafkatest.tests.streams.streams_named_repartition_topic_test
Class:  StreamsNamedRepartitionTopicTest
Method: test_upgrade_topology_with_named_repartition_topic
    Tests using a named repartition topic by starting the
    application, then doing a rolling upgrade with added
    operations, and verifying the application still runs
    
2 minutes 4.935 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_consumer_throughput
Arguments:
{
  "compression_type": "none",
  "security_protocol": "PLAINTEXT"
}
        Consume 10e6 100-byte messages with 1 or more consumers from a topic with 6 partitions
        and report throughput.
        
1 minute 29.089 seconds
{
  "mb_per_sec": 94.6575,
  "records_per_sec": 992555.8313
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_consumer_throughput
Arguments:
{
  "compression_type": "snappy",
  "security_protocol": "PLAINTEXT"
}
        Consume 10e6 100-byte messages with 1 or more consumers from a topic with 6 partitions
        and report throughput.
        
1 minute 25.526 seconds
{
  "mb_per_sec": 161.5302,
  "records_per_sec": 1693766.9377
}
Detail
Module: kafkatest.tests.client.message_format_change_test
Class:  MessageFormatChangeTest
Method: test_compatibility
Arguments:
{
  "consumer_version": "0.10.2.2",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.10.2.2"
}
 This test performs the following checks:
        The workload is a mix of 0.9.x, 0.10.x and 0.11.x producers and consumers
        that produce to and consume from a DEV_BRANCH cluster
        1. initially the topic is using message format 0.9.0
        2. change the message format version for topic to 0.10.0 on the fly.
        3. change the message format version for topic to 0.11.0 on the fly.
        4. change the message format version for topic back to 0.10.0 on the fly (only if the client version is 0.11.0 or newer)
        - The producers and consumers should not have any issue.

        Note regarding step number 4. Downgrading the message format version is generally unsupported as it breaks
        older clients. More concretely, if we downgrade a topic from 0.11.0 to 0.10.0 after it contains messages with
        version 0.11.0, we will return the 0.11.0 messages without down conversion due to an optimisation in the
        handling of fetch requests. This will break any consumer that doesn't support 0.11.0. So, in practice, step 4
        is similar to step 2 and it didn't seem worth it to increase the cluster size in order to add a step 5 that
        would change the message format version for the topic back to 0.9.0.0.
        
3 minutes 44.542 seconds
Detail
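Note: the on-the-fly format change in steps 2-4 is an ordinary dynamic topic config update. A minimal sketch of the same operation, assuming a ZooKeeper-based cluster at localhost:2181 and kafka-configs.sh on the PATH (both placeholders, not what the harness actually provisions):

    # Sketch: switch a topic's message format version on a live cluster.
    import subprocess

    def set_message_format(topic, version, zookeeper="localhost:2181"):
        # Dynamic topic config; brokers pick it up without a restart.
        subprocess.check_call([
            "kafka-configs.sh", "--zookeeper", zookeeper,
            "--alter", "--entity-type", "topics", "--entity-name", topic,
            "--add-config", "message.format.version=%s" % version,
        ])

    for v in ("0.10.0", "0.11.0", "0.10.0"):  # steps 2-4 above
        set_message_format("test_topic", v)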
Module: kafkatest.tests.client.message_format_change_test
Class:  MessageFormatChangeTest
Method: test_compatibility
Arguments:
{
  "consumer_version": "0.10.2.2",
  "metadata_quorum": "ZK",
  "producer_version": "0.10.2.2"
}
 This test performs the following checks:
        The workload is a mix of 0.9.x, 0.10.x and 0.11.x producers and consumers
        that produce to and consume from a DEV_BRANCH cluster
        1. initially the topic is using message format 0.9.0
        2. change the message format version for topic to 0.10.0 on the fly.
        3. change the message format version for topic to 0.11.0 on the fly.
        4. change the message format version for topic back to 0.10.0 on the fly (only if the client version is 0.11.0 or newer)
        - The producers and consumers should not have any issue.

        Note regarding step number 4. Downgrading the message format version is generally unsupported as it breaks
        older clients. More concretely, if we downgrade a topic from 0.11.0 to 0.10.0 after it contains messages with
        version 0.11.0, we will return the 0.11.0 messages without down conversion due to an optimisation in the
        handling of fetch requests. This will break any consumer that doesn't support 0.11.0. So, in practice, step 4
        is similar to step 2 and it didn't seem worth it to increase the cluster size in order to add a step 5 that
        would change the message format version for the topic back to 0.9.0.0.
        
3 minutes 52.969 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_consumer_throughput
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Consume 10e6 100-byte messages with 1 or more consumers from a topic with 6 partitions
        and report throughput.
        
2 minutes 1.268 seconds
{
  "mb_per_sec": 60.7397,
  "records_per_sec": 636902.1081
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_consumer_throughput
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Consume 10e6 100-byte messages with 1 or more consumers from a topic with 6 partitions
        and report throughput.
        
1 minute 39.442 seconds
{
  "mb_per_sec": 150.4693,
  "records_per_sec": 1577784.7902
}
Detail
Module: kafkatest.tests.client.message_format_change_test
Class:  MessageFormatChangeTest
Method: test_compatibility
Arguments:
{
  "consumer_version": "0.9.0.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.9.0.1"
}
 This test performs the following checks:
        The workload is a mix of 0.9.x, 0.10.x and 0.11.x producers and consumers
        that produce to and consume from a DEV_BRANCH cluster
        1. initially the topic is using message format 0.9.0
        2. change the message format version for topic to 0.10.0 on the fly.
        3. change the message format version for topic to 0.11.0 on the fly.
        4. change the message format version for topic back to 0.10.0 on the fly (only if the client version is 0.11.0 or newer)
        - The producers and consumers should not have any issue.

        Note regarding step number 4. Downgrading the message format version is generally unsupported as it breaks
        older clients. More concretely, if we downgrade a topic from 0.11.0 to 0.10.0 after it contains messages with
        version 0.11.0, we will return the 0.11.0 messages without down conversion due to an optimisation in the
        handling of fetch requests. This will break any consumer that doesn't support 0.11.0. So, in practice, step 4
        is similar to step 2 and it didn't seem worth it to increase the cluster size in order to add a step 5 that
        would change the message format version for the topic back to 0.9.0.0.
        
2 minutes 59.786 seconds
Detail
Module: kafkatest.tests.client.message_format_change_test
Class:  MessageFormatChangeTest
Method: test_compatibility
Arguments:
{
  "consumer_version": "0.9.0.1",
  "metadata_quorum": "ZK",
  "producer_version": "0.9.0.1"
}
 This test performs the following checks:
        The workload is a mix of 0.9.x, 0.10.x and 0.11.x producers and consumers
        that produce to and consume from a DEV_BRANCH cluster
        1. initially the topic is using message format 0.9.0
        2. change the message format version for topic to 0.10.0 on the fly.
        3. change the message format version for topic to 0.11.0 on the fly.
        4. change the message format version for topic back to 0.10.0 on the fly (only if the client version is 0.11.0 or newer)
        - The producers and consumers should not have any issue.

        Note regarding step number 4. Downgrading the message format version is generally unsupported as it breaks
        older clients. More concretely, if we downgrade a topic from 0.11.0 to 0.10.0 after it contains messages with
        version 0.11.0, we will return the 0.11.0 messages without down conversion due to an optimisation in the
        handling of fetch requests. This will break any consumer that doesn't support 0.11.0. So, in practice, step 4
        is similar to step 2 and it didn't seem worth it to increase the cluster size in order to add a step 5 that
        would change the message format version for the topic back to 0.9.0.0.
        
2 minutes 56.813 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_consumer_throughput
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Consume 10e6 100-byte messages with 1 or more consumers from a topic with 6 partitions
        and report throughput.
        
2 minutes 7.127 seconds
{
  "mb_per_sec": 56.049,
  "records_per_sec": 587716.7205
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_consumer_throughput
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Consume 10e6 100-byte messages with 1 or more consumers from a topic with 6 partitions
        and report throughput.
        
1 minute 33.128 seconds
{
  "mb_per_sec": 155.0186,
  "records_per_sec": 1625487.6463
}
Detail
Module: kafkatest.tests.client.message_format_change_test
Class:  MessageFormatChangeTest
Method: test_compatibility
Arguments:
{
  "consumer_version": "dev",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev"
}
 This test performs the following checks:
        The workload is a mix of 0.9.x, 0.10.x and 0.11.x producers and consumers
        that produce to and consume from a DEV_BRANCH cluster
        1. initially the topic is using message format 0.9.0
        2. change the message format version for topic to 0.10.0 on the fly.
        3. change the message format version for topic to 0.11.0 on the fly.
        4. change the message format version for topic back to 0.10.0 on the fly (only if the client version is 0.11.0 or newer)
        - The producers and consumers should not have any issue.

        Note regarding step number 4. Downgrading the message format version is generally unsupported as it breaks
        older clients. More concretely, if we downgrade a topic from 0.11.0 to 0.10.0 after it contains messages with
        version 0.11.0, we will return the 0.11.0 messages without down conversion due to an optimisation in the
        handling of fetch requests. This will break any consumer that doesn't support 0.11.0. So, in practice, step 4
        is similar to step 2 and it didn't seem worth it to increase the cluster size in order to add a step 5 that
        would change the message format version for the topic back to 0.9.0.0.
        
3 minutes 54.630 seconds
Detail
Module: kafkatest.tests.client.message_format_change_test
Class:  MessageFormatChangeTest
Method: test_compatibility
Arguments:
{
  "consumer_version": "dev",
  "metadata_quorum": "ZK",
  "producer_version": "dev"
}
 This test performs the following checks:
        The workload is a mix of 0.9.x, 0.10.x and 0.11.x producers and consumers
        that produce to and consume from a DEV_BRANCH cluster
        1. initially the topic is using message format 0.9.0
        2. change the message format version for topic to 0.10.0 on the fly.
        3. change the message format version for topic to 0.11.0 on the fly.
        4. change the message format version for topic back to 0.10.0 on the fly (only if the client version is 0.11.0 or newer)
        - The producers and consumers should not have any issue.

        Note regarding step number 4. Downgrading the message format version is generally unsupported as it breaks
        older clients. More concretely, if we downgrade a topic from 0.11.0 to 0.10.0 after it contains messages with
        version 0.11.0, we will return the 0.11.0 messages without down conversion due to an optimisation in the
        handling of fetch requests. This will break any consumer that doesn't support 0.11.0. So, in practice, step 4
        is similar to step 2 and it didn't seem worth it to increase the cluster size in order to add a step 5 that
        would change the message format version for the topic back to 0.9.0.0.
        
4 minutes 1.635 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "none",
  "security_protocol": "SASL_PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 50.307 seconds
{
  "latency_50th_ms": 1.0,
  "latency_999th_ms": 36.0,
  "latency_99th_ms": 16.0
}
Detail
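Note: the latency measurement itself is just a produce timestamp diffed against the consume time, which is all EndToEndLatency.scala does. A minimal sketch of the same idea, assuming the confluent-kafka Python client and a broker at localhost:9092 (placeholders; the harness runs the Scala tool):

    # Sketch: single-message end-to-end latency (assumes confluent-kafka).
    import time
    from confluent_kafka import Producer, Consumer

    producer = Producer({"bootstrap.servers": "localhost:9092"})
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "latency-probe",
        "auto.offset.reset": "latest",
    })
    consumer.subscribe(["latency-topic"])

    latencies = []
    for _ in range(1000):
        sent = time.time()
        producer.produce("latency-topic", b"x")
        producer.flush()                      # wait for the ack
        msg = consumer.poll(timeout=10.0)
        if msg is not None and msg.error() is None:
            latencies.append((time.time() - sent) * 1000.0)  # ms

    latencies.sort()
    print("p50 %.1f ms  p99 %.1f ms" % (latencies[len(latencies) // 2],
                                        latencies[int(len(latencies) * 0.99)]))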
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "snappy",
  "security_protocol": "SASL_PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 33.860 seconds
{
  "latency_50th_ms": 1.0,
  "latency_999th_ms": 28.0,
  "latency_99th_ms": 11.0
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "none",
  "security_protocol": "SASL_SSL"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
2 minutes 4.133 seconds
{
  "latency_50th_ms": 2.0,
  "latency_999th_ms": 27.0,
  "latency_99th_ms": 11.0
}
Detail
Module: kafkatest.tests.core.replica_scale_test
Class:  ReplicaScaleTest
Method: test_clean_bounce
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "partition_count": 34,
  "replication_factor": 3,
  "topic_count": 50
}
6 minutes 45.742 seconds
Detail
Module: kafkatest.tests.core.replica_scale_test
Class:  ReplicaScaleTest
Method: test_clean_bounce
Arguments:
{
  "metadata_quorum": "ZK",
  "partition_count": 34,
  "replication_factor": 3,
  "topic_count": 50
}
6 minutes 39.176 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "snappy",
  "security_protocol": "SASL_SSL"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
2 minutes 4.707 seconds
{
  "latency_50th_ms": 2.0,
  "latency_999th_ms": 26.0,
  "latency_99th_ms": 14.0
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_and_consumer
Arguments:
{
  "compression_type": "none",
  "security_protocol": "PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Concurrently produce and consume 10e6 messages with a single producer and a single consumer.

        Return aggregate throughput statistics for both producer and consumer.

        (Under the hood, this runs ProducerPerformance.java, and ConsumerPerformance.scala)
        
1 minute 17.379 seconds
{
  "consumer": {
    "mb_per_sec": 54.3528,
    "records_per_sec": 569930.4685
  },
  "producer": {
    "mb_per_sec": 53.12,
    "records_per_sec": 557040.998217
  }
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_and_consumer
Arguments:
{
  "compression_type": "snappy",
  "security_protocol": "PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Concurrently produce and consume 10e6 messages with a single producer and a single consumer.

        Return aggregate throughput statistics for both producer and consumer.

        (Under the hood, this runs ProducerPerformance.java, and ConsumerPerformance.scala)
        
1 minute 15.115 seconds
{
  "consumer": {
    "mb_per_sec": 71.5166,
    "records_per_sec": 749906.2617
  },
  "producer": {
    "mb_per_sec": 66.96,
    "records_per_sec": 702148.574638
  }
}
Detail
Module: kafkatest.tests.core.replica_scale_test
Class:  ReplicaScaleTest
Method: test_produce_consume
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "partition_count": 34,
  "replication_factor": 3,
  "topic_count": 50
}
4 minutes 29.048 seconds
Detail
Module: kafkatest.tests.core.replica_scale_test
Class:  ReplicaScaleTest
Method: test_produce_consume
Arguments:
{
  "metadata_quorum": "ZK",
  "partition_count": 34,
  "replication_factor": 3,
  "topic_count": 50
}
4 minutes 31.082 seconds
Detail
Module: kafkatest.sanity_checks.test_kafka_version
Class:  KafkaVersionTest
Method: test_0_8_2
Test the kafka service node-versioning API: verify that we can bring up a single-node 0.8.2.X cluster.
12.720 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_and_consumer
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Setup: 1 node zk + 3 node kafka cluster
        Concurrently produce and consume 10e6 messages with a single producer and a single consumer.

        Return aggregate throughput statistics for both producer and consumer.

        (Under the hood, this runs ProducerPerformance.java, and ConsumerPerformance.scala)
        
1 minute 46.958 seconds
{
  "consumer": {
    "mb_per_sec": 29.5392,
    "records_per_sec": 309741.366
  },
  "producer": {
    "mb_per_sec": 29.1,
    "records_per_sec": 305129.222226
  }
}
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "all",
  "metadata_quorum": "REMOTE_RAFT",
  "num_conflict_consumers": 1
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
44.121 seconds
Detail
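Note: static membership is driven entirely by the group.instance.id setting named in the docstring; two live consumers claiming the same id is exactly what triggers fencing. A sketch of the conflicting configuration, assuming the confluent-kafka client (the test itself drives the Java VerifiableConsumer):

    # Sketch: two consumers with the same static member id; the broker
    # admits one and fences the other (assumes confluent-kafka >= 1.4).
    from confluent_kafka import Consumer

    base = {
        "bootstrap.servers": "localhost:9092",
        "group.id": "static-demo",
        "session.timeout.ms": 30000,  # static members ride out restarts up to this
    }
    conflict = {"group.instance.id": "instance-1"}
    a = Consumer({**base, **conflict})
    b = Consumer({**base, **conflict})  # same id on purpose
    a.subscribe(["demo-topic"])
    b.subscribe(["demo-topic"])
    # On the next rebalance only one member per group.instance.id survives;
    # the fenced consumer surfaces a fatal error instead of rejoining.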
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "all",
  "metadata_quorum": "ZK",
  "num_conflict_consumers": 1
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
55.745 seconds
Detail
Module: kafkatest.tests.core.security_test
Class:  SecurityTest
Method: test_quorum_ssl_endpoint_validation_failure
Arguments:
{
  "metadata_quorum": "ZK"
}
        Test that an invalid hostname in ZooKeeper or the Raft Controller prevents the broker from starting.
        
1 minute 1.183 seconds
Detail
Module: kafkatest.tests.core.security_test
Class:  SecurityTest
Method: test_quorum_ssl_endpoint_validation_failure
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Test that an invalid hostname in ZooKeeper or the Raft Controller prevents the broker from starting.
        
1 minute 13.460 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "stable",
  "metadata_quorum": "REMOTE_RAFT",
  "num_conflict_consumers": 1
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
50.748 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_and_consumer
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Setup: 1 node zk + 3 node kafka cluster
        Concurrently produce and consume 10e6 messages with a single producer and a single consumer.

        Return aggregate throughput statistics for both producer and consumer.

        (Under the hood, this runs ProducerPerformance.java, and ConsumerPerformance.scala)
        
1 minute 22.954 seconds
{
  "consumer": {
    "mb_per_sec": 74.02,
    "records_per_sec": 776156.4731
  },
  "producer": {
    "mb_per_sec": 69.52,
    "records_per_sec": 728969.237498
  }
}
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "stable",
  "metadata_quorum": "ZK",
  "num_conflict_consumers": 1
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
58.614 seconds
Detail
Module: kafkatest.sanity_checks.test_bounce
Class:  TestBounce
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT"
}
        Test that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages both before and after a subsequent roll.
        
54.725 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "all",
  "metadata_quorum": "REMOTE_RAFT",
  "num_conflict_consumers": 2
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
48.300 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "all",
  "metadata_quorum": "ZK",
  "num_conflict_consumers": 2
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
57.810 seconds
Detail
Module: kafkatest.sanity_checks.test_bounce
Class:  TestBounce
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "ZK"
}
        Test that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages both before and after a subsequent roll.
        
48.886 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "stable",
  "metadata_quorum": "REMOTE_RAFT",
  "num_conflict_consumers": 2
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
52.176 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_and_consumer
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Setup: 1 node zk + 3 node kafka cluster
        Concurrently produce and consume 10e6 messages with a single producer and a single consumer.

        Return aggregate throughput statistics for both producer and consumer.

        (Under the hood, this runs ProducerPerformance.java, and ConsumerPerformance.scala)
        
1 minute 46.483 seconds
{
  "consumer": {
    "mb_per_sec": 29.3827,
    "records_per_sec": 308099.9476
  },
  "producer": {
    "mb_per_sec": 28.46,
    "records_per_sec": 298373.86245
  }
}
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "security_protocol": "SASL_PLAINTEXT"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
29.546 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_fencing_static_consumer
Arguments:
{
  "fencing_stage": "stable",
  "metadata_quorum": "ZK",
  "num_conflict_consumers": 2
}
        Verify correct static consumer behavior when there are conflicting consumers with the same group.instance.id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static members and wait until they've joined the group. Some conflict consumers
          will be configured with the same group.instance.id.
        - Let normal consumers and fencing consumers start at the same time, and expect only unique consumers left.
        
1 minute 1.295 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_PLAINTEXT"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
37.851 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_and_consumer
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Setup: 1 node zk + 3 node kafka cluster
        Concurrently produce and consume 10e6 messages with a single producer and a single consumer.

        Return aggregate throughput statistics for both producer and consumer.

        (Under the hood, this runs ProducerPerformance.java, and ConsumerPerformance.scala)
        
1 minute 23.433 seconds
{
  "consumer": {
    "mb_per_sec": 77.4338,
    "records_per_sec": 811951.9324
  },
  "producer": {
    "mb_per_sec": 70.35,
    "records_per_sec": 737626.318507
  }
}
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "security_protocol": "SASL_SSL"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
35.371 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_consume_bench
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "topics": [
    "consume_bench_topic[0-5]:[0-4]"
  ]
}
        Runs a ConsumeBench workload to consume messages
        
1 minute 58.175 seconds
Detail
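Note: the bracketed topics value is Trogdor's range shorthand: "consume_bench_topic[0-5]" names six topics and the ":[0-4]" suffix pins partitions 0 through 4 of each. A small illustrative expansion in Python (Trogdor parses this spec in Java; the regex here is an assumption about the grammar):

    # Sketch: expand "name[0-5]:[0-4]" into (topic, partition) pairs.
    import re

    def expand(spec):
        m = re.match(r"(.+?)\[(\d+)-(\d+)\](?::\[(\d+)-(\d+)\])?$", spec)
        prefix, t_lo, t_hi, p_lo, p_hi = m.groups()
        for t in range(int(t_lo), int(t_hi) + 1):
            topic = "%s%d" % (prefix, t)
            if p_lo is None:
                yield topic  # no partition range: whole topic
            else:
                for p in range(int(p_lo), int(p_hi) + 1):
                    yield (topic, p)

    print(list(expand("consume_bench_topic[0-5]:[0-4]"))[:3])
    # [('consume_bench_topic0', 0), ('consume_bench_topic0', 1), ('consume_bench_topic0', 2)]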
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
41.046 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_consume_bench
Arguments:
{
  "metadata_quorum": "ZK",
  "topics": [
    "consume_bench_topic[0-5]:[0-4]"
  ]
}
        Runs a ConsumeBench workload to consume messages
        
1 minute 45.123 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "sasl_mechanism": "PLAIN",
  "security_protocol": "SASL_SSL"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
37.039 seconds
Detail
Module: kafkatest.sanity_checks.test_bounce
Class:  TestBounce
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Test that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages both before and after a subsequent roll.
        
1 minute 38.492 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "sasl_mechanism": "PLAIN",
  "security_protocol": "SASL_SSL"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
41.960 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_consume_bench
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "topics": [
    "consume_bench_topic[0-5]"
  ]
}
        Runs a ConsumeBench workload to consume messages
        
1 minute 57.513 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  AssignmentValidationTest
Method: test_valid_assignment
Arguments:
{
  "assignment_strategy": "org.apache.kafka.clients.consumer.RangeAssignor",
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify assignment strategy correctness: each partition is assigned to exactly
        one consumer instance.

        Setup: single Kafka cluster with a set of consumers in the same group.

        - Start the consumers one by one
        - Validate assignment after every expected rebalance
        
45.626 seconds
Detail
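Note: the validation step reduces to one invariant over the assignment map: every partition is owned by exactly one member and nothing is left unassigned. A plain-Python sketch of that check (names are illustrative):

    # Sketch: each partition assigned to exactly one consumer.
    def valid_assignment(assignments, all_partitions):
        # assignments: {consumer_id: set of (topic, partition)}
        seen = set()
        for member, parts in assignments.items():
            overlap = seen & parts
            assert not overlap, "assigned twice: %s" % overlap
            seen |= parts
        missing = set(all_partitions) - seen
        assert not missing, "unassigned: %s" % missing

    valid_assignment(
        {"c1": {("t", 0), ("t", 1)}, "c2": {("t", 2)}},
        [("t", 0), ("t", 1), ("t", 2)],
    )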
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "sasl_mechanism": "SCRAM-SHA-256",
  "security_protocol": "SASL_SSL"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
39.891 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_consume_bench
Arguments:
{
  "metadata_quorum": "ZK",
  "topics": [
    "consume_bench_topic[0-5]"
  ]
}
        Runs a ConsumeBench workload to consume messages
        
1 minute 48.006 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  AssignmentValidationTest
Method: test_valid_assignment
Arguments:
{
  "assignment_strategy": "org.apache.kafka.clients.consumer.RangeAssignor",
  "metadata_quorum": "ZK"
}
        Verify assignment strategy correctness: each partition is assigned to exactly
        one consumer instance.

        Setup: single Kafka cluster with a set of consumers in the same group.

        - Start the consumers one by one
        - Validate assignment after every expected rebalance
        
54.802 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "sasl_mechanism": "SCRAM-SHA-512",
  "security_protocol": "SASL_SSL"
}
Check that console consumer starts/stops properly, and that we are capturing log output.
39.730 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_version
Check that console consumer v0.8.2.X successfully starts and consumes messages.
27.499 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  AssignmentValidationTest
Method: test_valid_assignment
Arguments:
{
  "assignment_strategy": "org.apache.kafka.clients.consumer.RoundRobinAssignor",
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify assignment strategy correctness: each partition is assigned to exactly
        one consumer instance.

        Setup: single Kafka cluster with a set of consumers in the same group.

        - Start the consumers one by one
        - Validate assignment after every expected rebalance
        
43.719 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_multiple_consumers_random_group_partitions
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Runs multiple consumers to read messages from specific partitions.
        Since a consumerGroup isn't specified, each consumer will get assigned a random group
        and consume from all partitions
        
1 minute 58.322 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_multiple_consumers_random_group_partitions
Arguments:
{
  "metadata_quorum": "ZK"
}
        Runs multiple consumers to read messages from specific partitions.
        Since a consumerGroup isn't specified, each consumer will get assigned a random group
        and consume from all partitions
        
1 minute 46.601 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_multiple_raft_sasl_mechanisms
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Test for remote Raft cases that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages.  The inter-controller and broker-to-controller
        security protocols are both SASL_PLAINTEXT but the SASL mechanisms are different (we set
        GSSAPI for the inter-controller mechanism and PLAIN for the broker-to-controller mechanism).
        This test differs from the above tests: the ones above used the same SASL mechanism for both paths.
        
47.801 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  AssignmentValidationTest
Method: test_valid_assignment
Arguments:
{
  "assignment_strategy": "org.apache.kafka.clients.consumer.RoundRobinAssignor",
  "metadata_quorum": "ZK"
}
        Verify assignment strategy correctness: each partition is assigned to exactly
        one consumer instance.

        Setup: single Kafka cluster with a set of consumers in the same group.

        - Start the consumers one by one
        - Validate assignment after every expected rebalance
        
53.500 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_multiple_raft_security_protocols
Arguments:
{
  "inter_broker_security_protocol": "PLAINTEXT",
  "metadata_quorum": "REMOTE_RAFT"
}
        Test for remote Raft cases that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages.  The inter-controller and broker-to-controller
        security protocols are defined to be different (which differs from the above test, where they were the same).
        
47.812 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  AssignmentValidationTest
Method: test_valid_assignment
Arguments:
{
  "assignment_strategy": "org.apache.kafka.clients.consumer.StickyAssignor",
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify assignment strategy correctness: each partition is assigned to exactly
        one consumer instance.

        Setup: single Kafka cluster with a set of consumers in the same group.

        - Start the consumers one by one
        - Validate assignment after every expected rebalance
        
45.627 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_multiple_consumers_random_group_topics
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Runs multiple consumer groups to read messages from topics.
        Since a consumerGroup isn't specified, each consumer should read from all topics independently
        
1 minute 44.195 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_multiple_consumers_random_group_topics
Arguments:
{
  "metadata_quorum": "ZK"
}
        Runs multiple consumer groups to read messages from topics.
        Since a consumerGroup isn't specified, each consumer should read from all topics independently
        
1 minute 33.850 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_multiple_raft_security_protocols
Arguments:
{
  "inter_broker_sasl_mechanism": "GSSAPI",
  "inter_broker_security_protocol": "SASL_SSL",
  "metadata_quorum": "REMOTE_RAFT"
}
        Test for remote Raft cases that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages.  The inter-controller and broker-to-controller
        security protocols are defined to be different (which differs from the above test, where they were the same).
        
51.180 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  AssignmentValidationTest
Method: test_valid_assignment
Arguments:
{
  "assignment_strategy": "org.apache.kafka.clients.consumer.StickyAssignor",
  "metadata_quorum": "ZK"
}
        Verify assignment strategy correctness: each partition is assigned to exactly
        one consumer instance.

        Setup: single Kafka cluster with a set of consumers in the same group.

        - Start the consumers one by one
        - Validate assignment after every expected rebalance
        
53.171 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_multiple_raft_security_protocols
Arguments:
{
  "inter_broker_sasl_mechanism": "PLAIN",
  "inter_broker_security_protocol": "SASL_SSL",
  "metadata_quorum": "REMOTE_RAFT"
}
        Test for remote Raft cases that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages.  The inter-controller and broker-to-controller
        security protocols are defined to be different (which differs from the above test, where they were the same).
        
48.446 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_multiple_consumers_specified_group_partitions_should_raise
Arguments:
{
  "metadata_quorum": "ZK"
}
        Runs multiple consumers in the same group to read messages from specific partitions.
        It is an invalid configuration to provide a consumer group and specific partitions.
        
1 minute 42.761 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_multiple_raft_security_protocols
Arguments:
{
  "inter_broker_security_protocol": "SSL",
  "metadata_quorum": "REMOTE_RAFT"
}
        Test for remote Raft cases that we can start VerifiableProducer on the current branch snapshot version, and
        verify that we can produce a small number of messages.  The inter-controller and broker-to-controller
        security protocols are defined to be different (which differs from the above test, where they were the same).
        
51.783 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_multiple_consumers_specified_group_partitions_should_raise
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Runs multiple consumers in the same group to read messages from specific partitions.
        It is an invalid configuration to provide a consumer group and specific partitions.
        
1 minute 55.953 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "producer_version": "dev",
  "sasl_mechanism": "GSSAPI",
  "security_protocol": "SASL_SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
46.889 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_single_partition
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Run a ConsumeBench against a single partition
        
1 minute 43.536 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_single_partition
Arguments:
{
  "metadata_quorum": "ZK"
}
        Run a ConsumeBench against a single partition
        
1 minute 31.831 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "sasl_mechanism": "GSSAPI",
  "security_protocol": "SASL_SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
54.543 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "sasl_mechanism": "GSSAPI",
  "security_protocol": "SASL_SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
43.895 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "producer_version": "dev",
  "sasl_mechanism": "PLAIN",
  "security_protocol": "SASL_SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
43.217 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_two_consumers_specified_group_topics
Arguments:
{
  "metadata_quorum": "ZK"
}
        Runs two consumers in the same consumer group to read messages from topics.
        Since a consumerGroup is specified, each consumer should dynamically get assigned a partition from group
        
1 minute 34.982 seconds
Detail
Module: kafkatest.tests.core.consume_bench_test
Class:  ConsumeBenchTest
Method: test_two_consumers_specified_group_topics
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Runs two consumers in the same consumer group to read messages from topics.
        Since a consumerGroup is specified, each consumer should dynamically get assigned a partition from group
        
1 minute 50.232 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "sasl_mechanism": "PLAIN",
  "security_protocol": "SASL_SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
53.988 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_bounce
Arguments:
{
  "clean": false,
  "connect_protocol": "compatible"
}
        Validates that source and sink tasks that run continuously and produce a predictable sequence of messages
        run correctly and deliver messages exactly once when Kafka Connect workers undergo clean rolling bounces.
        
6 minutes 7.342 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "sasl_mechanism": "PLAIN",
  "security_protocol": "SASL_SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
44.346 seconds
Detail
Module: kafkatest.tests.core.group_mode_transactions_test
Class:  GroupModeTransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "failure_mode": "clean_bounce"
}
Essentially testing the same functionality as TransactionsTest by transactionally copying data
    from a source topic to a destination topic and killing the copy process as well as the broker
    randomly through the process. The major difference is that we choose to work as a collaborating
    group with the same topic subscription instead of individual copiers.

    In the end we verify that the final output topic contains exactly one committed copy of
    each message from the original producer.
    
1 minute 56.973 seconds
Detail
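Note: the copy loop under test is the standard consume-transform-produce transaction: the consumed offsets are committed inside the same transaction as the produced records, which is what makes the output exactly-once. A minimal sketch, assuming the confluent-kafka client and placeholder topic names (the test itself drives Kafka's TransactionalMessageCopier):

    # Sketch: transactional copy from "input" to "output".
    from confluent_kafka import Consumer, Producer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "copiers",              # shared group = collaborating copiers
        "isolation.level": "read_committed",
        "enable.auto.commit": False,
    })
    producer = Producer({
        "bootstrap.servers": "localhost:9092",
        "transactional.id": "copier-1",
    })
    consumer.subscribe(["input"])
    producer.init_transactions()

    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        producer.begin_transaction()
        producer.produce("output", msg.value())
        # Commit the consumed offset atomically with the produced record.
        producer.send_offsets_to_transaction(
            consumer.position(consumer.assignment()),
            consumer.consumer_group_metadata())
        producer.commit_transaction()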
Module: kafkatest.tests.client.consumer_rolling_upgrade_test
Class:  ConsumerRollingUpgradeTest
Method: rolling_update_test
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify that rolling updates of partition assignment strategies work correctly. In this
        test, we use a rolling restart to change the group's assignment strategy from "range" 
        to "roundrobin." We verify after every restart that all members are still in the group
        and that the correct assignment strategy was used.
        
39.624 seconds
Detail
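Note: the range-to-roundrobin switch works because a consumer may advertise several strategies in preference order, and the group only flips once every live member supports the new one. A sketch of the two rolling phases, using librdkafka's strategy names via confluent-kafka (an assumption; the test drives the Java client, whose strategy names are the class names shown elsewhere in this report):

    # Sketch: two-phase rolling change of the assignment strategy.
    from confluent_kafka import Consumer

    def make_consumer(strategies):
        return Consumer({
            "bootstrap.servers": "localhost:9092",
            "group.id": "rolling-demo",
            # Preference-ordered list; the group picks the first strategy
            # that every live member advertises.
            "partition.assignment.strategy": strategies,
        })

    phase1 = make_consumer("roundrobin,range")  # first roll: support both
    phase2 = make_consumer("roundrobin")        # second roll: drop the old one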
Module: kafkatest.tests.core.group_mode_transactions_test
Class:  GroupModeTransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "failure_mode": "clean_bounce"
}
Essentially testing the same functionality as TransactionsTest by transactionally copying data
    from a source topic to a destination topic and killing the copy process as well as the broker
    randomly through the process. The major difference is that we choose to work as a collaborating
    group with the same topic subscription instead of individual copiers.

    In the end we verify that the final output topic contains exactly one committed copy of
    each message from the original producer.
    
2 minutes 11.137 seconds
Detail
Module: kafkatest.tests.client.consumer_rolling_upgrade_test
Class:  ConsumerRollingUpgradeTest
Method: rolling_update_test
Arguments:
{
  "metadata_quorum": "ZK"
}
        Verify that rolling updates of partition assignment strategies work correctly. In this
        test, we use a rolling restart to change the group's assignment strategy from "range" 
        to "roundrobin." We verify after every restart that all members are still in the group
        and that the correct assignment strategy was used.
        
34.992 seconds
Detail
Module: kafkatest.tests.client.pluggable_test
Class:  PluggableConsumerTest
Method: test_start_stop
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Test that loading a pluggable VerifiableConsumer module works
        
27.174 seconds
Detail
Module: kafkatest.tests.client.pluggable_test
Class:  PluggableConsumerTest
Method: test_start_stop
Arguments:
{
  "metadata_quorum": "ZK"
}
        Test that loading a pluggable VerifiableConsumer module works
        
20.468 seconds
Detail
Module: kafkatest.tests.connect.connect_rest_test
Class:  ConnectRestApiTest
Method: test_rest_api
Arguments:
{
  "connect_protocol": "compatible"
}
    Test of Kafka Connect's REST API endpoints.
    
46.870 seconds
Detail
Module: kafkatest.tests.core.group_mode_transactions_test
Class:  GroupModeTransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "failure_mode": "hard_bounce"
}
Essentially testing the same functionality as TransactionsTest by transactionally copying data
    from a source topic to a destination topic and killing the copy process as well as the broker
    randomly through the process. The major difference is that we choose to work as a collaborating
    group with the same topic subscription instead of individual copiers.

    In the end we verify that the final output topic contains exactly one committed copy of
    each message from the original producer.
    
2 minutes 7.954 seconds
Detail
Module: kafkatest.tests.core.group_mode_transactions_test
Class:  GroupModeTransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "failure_mode": "hard_bounce"
}
Essentially testing the same functionality as TransactionsTest by transactionally copying data
    from a source topic to a destination topic and killing the copy process as well as the broker
    randomly through the process. The major difference is that we choose to work as a collaborating
    group with the same topic subscription instead of individual copiers.

    In the end we verify that the final output topic contains exactly one committed copy of
    each message from the original producer.
    
2 minutes 54.714 seconds
Detail
Module: kafkatest.tests.connect.connect_rest_test
Class:  ConnectRestApiTest
Method: test_rest_api
Arguments:
{
  "connect_protocol": "eager"
}
    Test of Kafka Connect's REST API endpoints.
    
46.808 seconds
Detail
Module: kafkatest.tests.core.get_offset_shell_test
Class:  GetOffsetShellTest
Method: test_get_offset_shell
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Tests that GetOffsetShell retrieves offsets correctly
        
31.771 seconds
Detail
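Note: GetOffsetShell itself is a one-line invocation; the test wraps it, but the underlying call looks roughly like this (broker address and topic are placeholders):

    # Sketch: query per-partition offsets with Kafka's GetOffsetShell tool.
    import subprocess

    out = subprocess.check_output([
        "kafka-run-class.sh", "kafka.tools.GetOffsetShell",
        "--broker-list", "localhost:9092",
        "--topic", "test_topic",
        "--time", "-1",        # -1 = latest offset, -2 = earliest
    ])
    print(out.decode())        # lines of topic:partition:offset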
Module: kafkatest.tests.core.get_offset_shell_test
Class:  GetOffsetShellTest
Method: test_get_offset_shell
Arguments:
{
  "metadata_quorum": "ZK"
}
        Tests that GetOffsetShell retrieves offsets correctly
        
27.862 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "0.11.0.3"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
31.783 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "1.0.2"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
30.368 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_bounce
Arguments:
{
  "clean": false,
  "connect_protocol": "eager"
}
        Validates that source and sink tasks that run continuously and produce a predictable sequence of messages
        run correctly and deliver messages exactly once when Kafka Connect workers undergo clean rolling bounces.
        
6 minutes 3.996 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "1.1.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
30.187 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.0.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
31.966 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.1.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
32.002 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.2.2"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
31.513 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.3.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
31.860 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.4.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.2, and 0.10.1
    - Streams w/ EOS-beta fails fast for older brokers 2.4 or older
    
33.107 seconds
Detail
Module: kafkatest.tests.core.throttling_test
Class:  ThrottlingTest
Method: test_throttled_reassignment
Arguments:
{
  "bounce_brokers": false
}
Tests throttled partition reassignment. This is essentially the same as
    the reassign_partitions_test, except that we throttle the reassignment
    and verify that it takes a sensible amount of time given the throttle
    and the amount of data being moved.

    Since correctness is time-dependent, this test also simplifies the
    cluster topology. In particular, we fix the number of brokers, the
    replication factor, the number of partitions, the partition size, and
    the number of partitions being moved so that we can accurately predict
    the time the throttled reassignment should take.
    
6 minutes 23.471 seconds
Detail
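
A minimal sketch of the mechanism exercised above: driving a partition reassignment with a replication throttle via the stock kafka-reassign-partitions.sh tool, whose --throttle flag caps inter-broker replication traffic in bytes/sec. The topic, broker ids, paths, and throttle value below are illustrative assumptions, not values from the test.

import json
import subprocess

# Hypothetical move: shift partition 0 of "test_topic" onto brokers 2 and 3.
reassignment = {
    "version": 1,
    "partitions": [
        {"topic": "test_topic", "partition": 0, "replicas": [2, 3]},
    ],
}
with open("/tmp/reassign.json", "w") as f:
    json.dump(reassignment, f)

# --throttle caps replication traffic so the move takes a predictable amount
# of time, which is exactly what this test asserts on.
subprocess.run(
    ["kafka-reassign-partitions.sh",
     "--bootstrap-server", "localhost:9092",
     "--reassignment-json-file", "/tmp/reassign.json",
     "--execute",
     "--throttle", "1048576"],  # 1 MiB/s
    check=True,
)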
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.5.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast for brokers 2.4 or older
    
32.916 seconds
Detail
Module: kafkatest.tests.core.throttling_test
Class:  ThrottlingTest
Method: test_throttled_reassignment
Arguments:
{
  "bounce_brokers": true
}
Tests throttled partition reassignment. This is essentially the same as
    the reassign_partitions_test, except that we throttle the reassignment
    and verify that it takes a sensible amount of time given the throttle
    and the amount of data being moved.

    Since correctness is time-dependent, this test also simplifies the
    cluster topology. In particular, we fix the number of brokers, the
    replication factor, the number of partitions, the partition size, and
    the number of partitions being moved so that we can accurately predict
    the time the throttled reassignment should take.
    
6 minutes 12.593 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_alpha_enabled
Arguments:
{
  "broker_version": "2.6.1"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast for brokers 2.4 or older
    
32.124 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "none",
  "security_protocol": "PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) 10e3 messages to a topic with 6 partitions and replication-factor 3 and consume them,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 24.884 seconds
{
  "latency_50th_ms": 1.0,
  "latency_999th_ms": 18.0,
  "latency_99th_ms": 8.0
}
Detail
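
For intuition, a minimal sketch of the same measurement idea using the kafka-python client; the real benchmark runs EndToEndLatency.scala as the entry says. The topic name, bootstrap address, and sample count are assumptions.

import time
from kafka import KafkaConsumer, KafkaProducer

TOPIC = "latency-test"  # hypothetical topic
consumer = KafkaConsumer(TOPIC,
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="latest",
                         consumer_timeout_ms=10000)
consumer.poll(timeout_ms=1000)  # force partition assignment before producing
producer = KafkaProducer(bootstrap_servers="localhost:9092", acks=1)

latencies_ms = []
for _ in range(100):
    # Embed the send time in the payload and measure it again on receipt.
    producer.send(TOPIC, str(time.time()).encode())
    producer.flush()
    record = next(consumer)
    latencies_ms.append((time.time() - float(record.value)) * 1000.0)

latencies_ms.sort()
print("p50 %.1f ms  p99 %.1f ms" % (latencies_ms[49], latencies_ms[98]))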
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "0.10.0.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 15.840 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_bounce
Arguments:
{
  "clean": false,
  "connect_protocol": "sessioned"
}
        Validates that source and sink tasks that run continuously and produce a predictable sequence of messages
        run correctly and deliver messages exactly once when Kafka Connect workers undergo rolling bounces.
        
6 minutes 0.844 seconds
Detail
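
The connect_protocol argument above maps to the distributed worker's connect.protocol setting (eager, compatible, or sessioned), which controls how workers rebalance connectors and tasks during these bounces. A minimal sketch of a worker properties file using real Connect config keys; hosts and topic names are illustrative assumptions.

worker_props = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "connect-cluster",
    # The rebalance protocol under test: "eager", "compatible", or "sessioned".
    "connect.protocol": "sessioned",
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "config.storage.topic": "connect-configs",
    "offset.storage.topic": "connect-offsets",
    "status.storage.topic": "connect-status",
}
with open("/tmp/connect-distributed.properties", "w") as f:
    f.writelines("%s=%s\n" % kv for kv in worker_props.items())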
Module: kafkatest.tests.streams.streams_standby_replica_test
Class:  StreamsStandbyTask
Method: test_standby_tasks_rebalance
    This test validates that using standby tasks helps with rebalance times
    and additionally verifies that standby replicas continue to work in the
    face of continual changes to the Streams code base.
    
2 minutes 43.266 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "snappy",
  "security_protocol": "PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) 10e3 messages to a topic with 6 partitions and replication-factor 3 and consume them,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 22.989 seconds
{
  "latency_50th_ms": 1.0,
  "latency_999th_ms": 17.0,
  "latency_99th_ms": 7.0
}
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "0.10.1.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 16.555 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "0.10.2.2"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 18.043 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "0.11.0.3"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 19.005 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "1.0.2"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 21.364 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_bounce
Arguments:
{
  "clean": true,
  "connect_protocol": "compatible"
}
        Validates that source and sink tasks that run continuously and produce a predictable sequence of messages
        run correctly and deliver messages exactly once when Kafka Connect workers undergo clean rolling bounces.
        
5 minutes 52.236 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "1.1.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 20.541 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_bounce
Arguments:
{
  "clean": true,
  "connect_protocol": "eager"
}
        Validates that source and sink tasks that run continuously and produce a predictable sequence of messages
        run correctly and deliver messages exactly once when Kafka Connect workers undergo clean rolling bounces.
        
6 minutes 0.691 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.0.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 22.588 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_file_source_and_sink
Arguments:
{
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Tests that a basic file connector works across clean rolling bounces. This validates that the connector is
        correctly created, that its tasks are instantiated, and that the work is rebalanced across nodes as they restart.
        
2 minutes 4.879 seconds
Detail
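
The file connector in the entry above is created through the Connect REST API. A hypothetical registration against a local worker, using the stock FileStreamSourceConnector; the host, port, file path, and topic are assumptions:

import json
import urllib.request

connector = {
    "name": "local-file-source",
    "config": {
        "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
        "tasks.max": "1",
        "file": "/tmp/input.txt",
        "topic": "connect-file-test",
    },
}
request = urllib.request.Request(
    "http://localhost:8083/connectors",
    data=json.dumps(connector).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # 201 when the connector is created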
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.1.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 22.408 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.2.2"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 22.979 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_file_source_and_sink
Arguments:
{
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Tests that a basic file connector works across clean rolling bounces. This validates that the connector is
        correctly created, that its tasks are instantiated, and that the work is rebalanced across nodes as they restart.
        
1 minute 13.979 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_file_source_and_sink
Arguments:
{
  "connect_protocol": "sessioned",
  "security_protocol": "PLAINTEXT"
}
        Tests that a basic file connector works across clean rolling bounces. This validates that the connector is
        correctly created, that its tasks are instantiated, and that the work is rebalanced across nodes as they restart.
        
1 minute 11.971 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.3.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 24.820 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_bounce
Arguments:
{
  "clean": true,
  "connect_protocol": "sessioned"
}
        Validates that source and sink tasks that run continuously and produce a predictable sequence of messages
        run correctly and deliver messages exactly once when Kafka Connect workers undergo clean rolling bounces.
        
5 minutes 54.368 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.4.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 24.500 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_file_source_and_sink
Arguments:
{
  "connect_protocol": "compatible",
  "security_protocol": "SASL_SSL"
}
        Tests that a basic file connector works across clean rolling bounces. This validates that the connector is
        correctly created, that its tasks are instantiated, and that the work is rebalanced across nodes as they restart.
        
1 minute 44.418 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_file_source_and_sink
Arguments:
{
  "connect_protocol": "eager",
  "security_protocol": "SASL_SSL"
}
        Tests that a basic file connector works across clean rolling bounces. This validates that the connector is
        correctly created, that its tasks are instantiated, and that the work is rebalanced across nodes as they restart.
        
1 minute 41.373 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.5.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 24.492 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.6.1"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 27.959 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_transformations
Arguments:
{
  "connect_protocol": "compatible"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
59.502 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_transformations
Arguments:
{
  "connect_protocol": "eager"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
1 minute 1.042 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_file_source_and_sink
Arguments:
{
  "connect_protocol": "sessioned",
  "security_protocol": "SASL_SSL"
}
        Tests that a basic file connector works across clean rolling bounces. This validates that the connector is
        correctly created, that its tasks are instantiated, and that the work is rebalanced across nodes as they restart.
        
2 minutes 38.103 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "2.7.0"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 26.937 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_transformations
Arguments:
{
  "connect_protocol": "sessioned"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
1 minute 4.001 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "dev",
  "metadata_quorum": "REMOTE_RAFT"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 27.246 seconds
Detail
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_file_source_and_sink
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
        Validates basic end-to-end functionality of Connect standalone using the file source and sink connectors. Includes
        parameterizations to test different converters (which also test per-connector converter overrides), schema/schemaless
        modes, and security support.
        
1 minute 38.331 seconds
Detail
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_file_source_and_sink
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
        Validates basic end-to-end functionality of Connect standalone using the file source and sink connectors. Includes
        parameterizations to test different converters (which also test per-connector converter overrides), schema/schemaless
        modes, and security support.
        
1 minute 29.370 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_produce_consume_test
Class:  ClientCompatibilityProduceConsumeTest
Method: test_produce_consume
Arguments:
{
  "broker_version": "dev",
  "metadata_quorum": "ZK"
}
    These tests validate that we can use a new client to produce and consume from older brokers.
    
2 minutes 26.568 seconds
Detail
Module: kafkatest.tests.core.fetch_from_follower_test
Class:  FetchFromFollowerTest
Method: test_consumer_preferred_read_replica
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        This test starts up brokers with "broker.rack" and "replica.selector.class" configurations set. The replica
        selector is set to the rack-aware implementation. One of the brokers has a different rack than the other two.
        We then use a console consumer with the "client.rack" set to the same value as the differing broker. After
        producing some records, we verify that the client has been informed of the preferred replica and that all the
        records are properly consumed.
        
2 minutes 10.656 seconds
Detail
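
The client half of the rack-aware setup above is a single consumer property. A minimal sketch with confluent-kafka, whose underlying librdkafka supports client.rack; the rack names, topic, and group id are illustrative assumptions:

from confluent_kafka import Consumer

# Broker side (server.properties), for reference:
#   broker.rack=rack-b
#   replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "rack-aware-demo",
    "client.rack": "rack-b",  # match the differing broker's rack
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["test-topic"])
msg = consumer.poll(10.0)  # fetches can now be served by the rack-local follower
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()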
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.10.0.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.10.0.1",
  "timestamp_type": "LogAppendTime"
}
1 minute 45.943 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.10.0.1",
  "metadata_quorum": "ZK",
  "producer_version": "0.10.0.1",
  "timestamp_type": "LogAppendTime"
}
1 minute 47.513 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.10.1.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.10.1.1",
  "timestamp_type": "LogAppendTime"
}
1 minute 50.910 seconds
Detail
Module: kafkatest.tests.core.fetch_from_follower_test
Class:  FetchFromFollowerTest
Method: test_consumer_preferred_read_replica
Arguments:
{
  "metadata_quorum": "ZK"
}
        This test starts up brokers with "broker.rack" and "replica.selector.class" configurations set. The replica
        selector is set to the rack-aware implementation. One of the brokers has a different rack than the other two.
        We then use a console consumer with the "client.rack" set to the same value as the differing broker. After
        producing some records, we verify that the client has been informed of the preferred replica and that all the
        records are properly consumed.
        
2 minutes 17.728 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.10.1.1",
  "metadata_quorum": "ZK",
  "producer_version": "0.10.1.1",
  "timestamp_type": "LogAppendTime"
}
1 minute 49.660 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "consumer_version": "0.10.2.2",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.10.2.2",
  "timestamp_type": "CreateTime"
}
1 minute 53.693 seconds
Detail
Module: kafkatest.tests.core.log_dir_failure_test
Class:  LogDirFailureTest
Method: test_replication_with_disk_failure
Arguments:
{
  "bounce_broker": false,
  "broker_type": "follower",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2
               and another topic with partitions=3, replication-factor=3, and min.insync.replicas=1
            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 12.536 seconds
Detail
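
The final validation step in the entry above reduces to a set-inclusion check: everything the broker acked must later be consumable. A minimal sketch of that check with kafka-python; the topic, message count, and addresses are assumptions:

from kafka import KafkaConsumer, KafkaProducer

TOPIC = "durability-test"  # hypothetical topic
producer = KafkaProducer(bootstrap_servers="localhost:9092", acks="all", retries=5)

# Record exactly which payloads the broker acked.
acked = set()
for i in range(1000):
    payload = str(i).encode()
    producer.send(TOPIC, payload).get(timeout=30)  # .get() blocks until the ack
    acked.add(payload)
producer.flush()

consumer = KafkaConsumer(TOPIC,
                         bootstrap_servers="localhost:9092",
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=10000)
consumed = {record.value for record in consumer}

# The durability guarantee under test: no acked message may be lost.
assert acked <= consumed, "lost %d acked messages" % len(acked - consumed)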
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "consumer_version": "0.10.2.2",
  "metadata_quorum": "ZK",
  "producer_version": "0.10.2.2",
  "timestamp_type": "CreateTime"
}
1 minute 50.140 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "gzip"
  ],
  "consumer_version": "0.11.0.3",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.11.0.3",
  "timestamp_type": "CreateTime"
}
1 minute 37.432 seconds
Detail
Module: kafkatest.tests.core.log_dir_failure_test
Class:  LogDirFailureTest
Method: test_replication_with_disk_failure
Arguments:
{
  "bounce_broker": false,
  "broker_type": "leader",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2
               and another topic with partitions=3, replication-factor=3, and min.insync.replicas=1
            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 7.970 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "gzip"
  ],
  "consumer_version": "0.11.0.3",
  "metadata_quorum": "ZK",
  "producer_version": "0.11.0.3",
  "timestamp_type": "CreateTime"
}
1 minute 49.993 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "0.8.2.2",
  "new_consumer": false,
  "producer_version": "0.8.2.2",
  "timestamp_type": null
}
1 minute 43.293 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.9.0.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.9.0.1",
  "timestamp_type": "LogAppendTime"
}
1 minute 26.296 seconds
Detail
Module: kafkatest.tests.core.log_dir_failure_test
Class:  LogDirFailureTest
Method: test_replication_with_disk_failure
Arguments:
{
  "bounce_broker": true,
  "broker_type": "follower",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2
               and another topic with partitions=3, replication-factor=3, and min.insync.replicas=1
            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 16.354 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.9.0.1",
  "metadata_quorum": "ZK",
  "producer_version": "0.9.0.1",
  "timestamp_type": "LogAppendTime"
}
1 minute 41.477 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.9.0.1",
  "timestamp_type": null
}
1 minute 39.334 seconds
Detail
Module: kafkatest.tests.core.log_dir_failure_test
Class:  LogDirFailureTest
Method: test_replication_with_disk_failure
Arguments:
{
  "bounce_broker": true,
  "broker_type": "leader",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2
               and another topic with partitions=3, replication-factor=3, and min.insync.replicas=1
            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 16.964 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "ZK",
  "producer_version": "0.9.0.1",
  "timestamp_type": null
}
1 minute 42.229 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "0.9.0.1",
  "timestamp_type": null
}
1 minute 39.591 seconds
Detail
Module: kafkatest.tests.core.round_trip_fault_test
Class:  RoundTripFaultTest
Method: test_produce_consume_with_broker_pause
3 minutes 48.904 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "ZK",
  "producer_version": "0.9.0.1",
  "timestamp_type": null
}
1 minute 47.746 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "1.0.2",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "1.0.2",
  "timestamp_type": "CreateTime"
}
1 minute 37.180 seconds
Detail
Module: kafkatest.tests.core.round_trip_fault_test
Class:  RoundTripFaultTest
Method: test_produce_consume_with_client_partition
3 minutes 21.986 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "1.0.2",
  "metadata_quorum": "ZK",
  "producer_version": "1.0.2",
  "timestamp_type": "CreateTime"
}
1 minute 41.498 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "consumer_version": "1.1.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "1.1.1",
  "timestamp_type": "CreateTime"
}
1 minute 35.015 seconds
Detail
Module: kafkatest.tests.core.round_trip_fault_test
Class:  RoundTripFaultTest
Method: test_produce_consume_with_latency
3 minutes 18.232 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "consumer_version": "1.1.1",
  "metadata_quorum": "ZK",
  "producer_version": "1.1.1",
  "timestamp_type": "CreateTime"
}
1 minute 36.280 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "2.0.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.0.1",
  "timestamp_type": "CreateTime"
}
1 minute 33.846 seconds
Detail
Module: kafkatest.tests.core.round_trip_fault_test
Class:  RoundTripFaultTest
Method: test_round_trip_workload
2 minutes 17.405 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "2.0.1",
  "metadata_quorum": "ZK",
  "producer_version": "2.0.1",
  "timestamp_type": "CreateTime"
}
1 minute 42.083 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "consumer_version": "2.1.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.1.1",
  "timestamp_type": "CreateTime"
}
1 minute 32.022 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.2.2",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.2.2",
  "timestamp_type": "CreateTime"
}
1 minute 35.155 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "consumer_version": "2.1.1",
  "metadata_quorum": "ZK",
  "producer_version": "2.1.1",
  "timestamp_type": "CreateTime"
}
1 minute 43.792 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_disable_separate_interbroker_listener
        Start with a cluster that has two listeners, one on SSL (clients), another on SASL_SSL (broker-to-broker).
        Start the producer and consumer on the SSL listener.
        Close the dedicated interbroker listener via a rolling restart.
        Ensure we can produce and consume via the SSL listener throughout.
        
3 minutes 21.168 seconds
Detail
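
A sketch of the starting broker configuration for the entry above: SSL for clients plus a dedicated SASL_SSL listener for broker-to-broker traffic, using real broker config keys. Ports are illustrative assumptions; "closing" the listener amounts to dropping SASL_SSL from listeners and pointing inter.broker.listener.name back at SSL on each broker in turn.

broker_props = {
    "listeners": "SSL://:9093,SASL_SSL://:9094",
    "listener.security.protocol.map": "SSL:SSL,SASL_SSL:SASL_SSL",
    "inter.broker.listener.name": "SASL_SSL",
    "sasl.mechanism.inter.broker.protocol": "GSSAPI",
}
with open("/tmp/server-two-listeners.properties", "w") as f:
    f.writelines("%s=%s\n" % kv for kv in broker_props.items())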
Module: kafkatest.tests.core.round_trip_fault_test
Class:  RoundTripFaultTest
Method: test_round_trip_workload_with_broker_partition
4 minutes 5.558 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.3.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.3.1",
  "timestamp_type": "CreateTime"
}
1 minute 26.002 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.2.2",
  "metadata_quorum": "ZK",
  "producer_version": "2.2.2",
  "timestamp_type": "CreateTime"
}
1 minute 35.612 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.3.1",
  "metadata_quorum": "ZK",
  "producer_version": "2.3.1",
  "timestamp_type": "CreateTime"
}
1 minute 37.013 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.4.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.4.1",
  "timestamp_type": "CreateTime"
}
1 minute 31.412 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.4.1",
  "metadata_quorum": "ZK",
  "producer_version": "2.4.1",
  "timestamp_type": "CreateTime"
}
1 minute 37.261 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.5.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.5.1",
  "timestamp_type": "CreateTime"
}
1 minute 36.906 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_one
Arguments:
{
  "client_protocol": "SASL_PLAINTEXT"
}
        Start with a PLAINTEXT cluster, then open a secured port via a rolling upgrade, ensuring we can produce
        and consume throughout over PLAINTEXT. Finally, check that we can produce and consume via the new secured port.
        
4 minutes 5.343 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_enable_separate_interbroker_listener
        Start with a cluster that has a single PLAINTEXT listener.
        Start producing/consuming on the PLAINTEXT port.
        While doing that, perform a rolling restart to enable a separate secured interbroker port.
        
4 minutes 40.060 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.5.1",
  "metadata_quorum": "ZK",
  "producer_version": "2.5.1",
  "timestamp_type": "CreateTime"
}
1 minute 37.839 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.6.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.6.1",
  "timestamp_type": "CreateTime"
}
1 minute 38.153 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.6.1",
  "metadata_quorum": "ZK",
  "producer_version": "2.6.1",
  "timestamp_type": "CreateTime"
}
1 minute 36.999 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.7.0",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "2.7.0",
  "timestamp_type": "CreateTime"
}
1 minute 39.720 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_one
Arguments:
{
  "client_protocol": "SASL_SSL"
}
        Start with a PLAINTEXT cluster, then open a secured port via a rolling upgrade, ensuring we can produce
        and consume throughout over PLAINTEXT. Finally, check that we can produce and consume via the new secured port.
        
4 minutes 17.398 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "0.9.0.1",
  "new_consumer": false,
  "producer_version": "dev",
  "timestamp_type": null
}
1 minute 40.529 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "2.7.0",
  "metadata_quorum": "ZK",
  "producer_version": "2.7.0",
  "timestamp_type": "CreateTime"
}
1 minute 47.105 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_sasl_mechanism_phase_one
Arguments:
{
  "new_client_sasl_mechanism": "PLAIN"
}
        Start with a SASL/GSSAPI cluster, then add a new SASL mechanism via a rolling upgrade, ensuring we can produce
        and consume throughout over SASL/GSSAPI. Finally, check that we can produce and consume using the new mechanism.
        
4 minutes 52.038 seconds
Detail
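
Phase one of the mechanism upgrade above is additive: each broker is restarted advertising both mechanisms while inter-broker traffic stays on GSSAPI. A sketch of the phase-one overrides, using real broker config keys (the file path is an assumption):

phase_one = {
    "sasl.enabled.mechanisms": "GSSAPI,PLAIN",         # old mechanism plus new
    "sasl.mechanism.inter.broker.protocol": "GSSAPI",  # unchanged in phase one
}
with open("/tmp/sasl-phase-one.properties", "w") as f:
    f.writelines("%s=%s\n" % kv for kv in phase_one.items())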
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": false,
  "failure_mode": "clean_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 49.822 seconds
Detail
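
The copy process in these transactions entries is a consume-transform-produce loop committed as a transaction per message batch. A compact sketch with confluent-kafka; the topic, group, and transactional.id are assumptions and error handling is elided:

from confluent_kafka import Consumer, Producer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "copy-group",
    "isolation.level": "read_committed",
    "enable.auto.commit": False,
    "auto.offset.reset": "earliest",
})
producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "transactional.id": "copier-1",  # stable id so a restarted copier fences its zombie
})

consumer.subscribe(["input-topic"])
producer.init_transactions()

for _ in range(1000):
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    producer.begin_transaction()
    producer.produce("output-topic", msg.value(), key=msg.key())
    # Committing the consumed offset inside the transaction is what makes the
    # copy exactly-once: a crash either replays or commits, never both.
    producer.send_offsets_to_transaction(
        [TopicPartition(msg.topic(), msg.partition(), msg.offset() + 1)],
        consumer.consumer_group_metadata(),
    )
    producer.commit_transaction()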
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.9.0.1",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "timestamp_type": "CreateTime"
}
1 minute 28.136 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "0.9.0.1",
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "timestamp_type": "CreateTime"
}
1 minute 35.855 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": false,
  "failure_mode": "clean_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 46.156 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": true,
  "failure_mode": "clean_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 43.278 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "timestamp_type": "LogAppendTime"
}
1 minute 36.461 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "none"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "timestamp_type": "LogAppendTime"
}
1 minute 36.992 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": true,
  "failure_mode": "clean_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 37.631 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "timestamp_type": "LogAppendTime"
}
1 minute 30.579 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": false,
  "failure_mode": "clean_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 51.365 seconds
Detail
Module: kafkatest.tests.core.compatibility_test_new_broker_test
Class:  ClientCompatibilityTestNewBroker
Method: test_compatibility
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "consumer_version": "dev",
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "timestamp_type": "LogAppendTime"
}
1 minute 40.865 seconds
Detail
Module: kafkatest.tests.core.security_test
Class:  SecurityTest
Method: test_client_ssl_endpoint_validation_failure
Arguments:
{
  "interbroker_security_protocol": "SSL",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
        Test that an invalid hostname in the certificate results in connection failures.
        When security_protocol=SSL, client SSL handshakes are expected to fail due to hostname verification failure.
        When security_protocol=PLAINTEXT and interbroker_security_protocol=SSL, controller connections fail
        with hostname verification failure. Since metadata cannot be propagated in the cluster without a valid certificate,
        the broker's metadata caches will be empty. Hence we expect Metadata requests to fail with an INVALID_REPLICATION_FACTOR
        error since the broker will attempt to create the topic automatically as it does not exist in the metadata cache,
        and there will be no online brokers.
        
49.058 seconds
Detail
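
The hostname check described above is governed by the client's endpoint identification algorithm. A sketch of a client that would hit that handshake failure, using confluent-kafka; the broker address and CA path are assumptions:

from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker1.example.com:9093",
    "security.protocol": "SSL",
    "ssl.ca.location": "/path/to/ca.crt",  # hypothetical CA bundle
    # "https" enables hostname verification: the handshake fails when the
    # certificate's SAN/CN does not match the host we dialed.
    "ssl.endpoint.identification.algorithm": "https",
})
producer.produce("test-topic", b"ping")
# flush() returns how many messages are still undelivered; with a failed
# handshake the message never leaves the queue.
print(producer.flush(10.0), "message(s) undelivered")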
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": false,
  "failure_mode": "clean_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 50.230 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": true,
  "failure_mode": "clean_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 29.379 seconds
Detail
Module: kafkatest.tests.core.security_test
Class:  SecurityTest
Method: test_client_ssl_endpoint_validation_failure
Arguments:
{
  "interbroker_security_protocol": "SSL",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
        Test that an invalid hostname in the certificate results in connection failures.
        When security_protocol=SSL, client SSL handshakes are expected to fail due to hostname verification failure.
        When security_protocol=PLAINTEXT and interbroker_security_protocol=SSL, controller connections fail
        with hostname verification failure. Since metadata cannot be propagated in the cluster without a valid certificate,
        the broker's metadata caches will be empty. Hence we expect Metadata requests to fail with an INVALID_REPLICATION_FACTOR
        error since the broker will attempt to create the topic automatically as it does not exist in the metadata cache,
        and there will be no online brokers.
        
1 minute 29.899 seconds
Detail
Module: kafkatest.tests.core.security_test
Class:  SecurityTest
Method: test_client_ssl_endpoint_validation_failure
Arguments:
{
  "interbroker_security_protocol": "PLAINTEXT",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SSL"
}
        Test that an invalid hostname in the certificate results in connection failures.
        When security_protocol=SSL, client SSL handshakes are expected to fail due to hostname verification failure.
        When security_protocol=PLAINTEXT and interbroker_security_protocol=SSL, controller connections fail
        with hostname verification failure. Since metadata cannot be propagated in the cluster without a valid certificate,
        the broker's metadata caches will be empty. Hence we expect Metadata requests to fail with an INVALID_REPLICATION_FACTOR
        error since the broker will attempt to create the topic automatically as it does not exist in the metadata cache,
        and there will be no online brokers.
        
1 minute 32.224 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": true,
  "failure_mode": "clean_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
1 minute 30.464 seconds
Detail
Module: kafkatest.tests.core.security_test
Class:  SecurityTest
Method: test_client_ssl_endpoint_validation_failure
Arguments:
{
  "interbroker_security_protocol": "PLAINTEXT",
  "metadata_quorum": "ZK",
  "security_protocol": "SSL"
}
        Test that an invalid hostname in the certificate results in connection failures.
        When security_protocol=SSL, client SSL handshakes are expected to fail due to hostname verification failure.
        When security_protocol=PLAINTEXT and interbroker_security_protocol=SSL, controller connections fail
        with hostname verification failure. Since metadata cannot be propagated in the cluster without a valid certificate,
        the broker's metadata caches will be empty. Hence we expect Metadata requests to fail with an INVALID_REPLICATION_FACTOR
        error since the broker will attempt to create the topic automatically as it does not exist in the metadata cache,
        and there will be no online brokers.
        
1 minute 30.635 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": false,
  "failure_mode": "hard_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
2 minutes 35.707 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": false,
  "failure_mode": "hard_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
2 minutes 40.631 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "0.10.0.1",
  "to_message_format_version": null
}
Tests upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 18.088 seconds
Detail
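
The two phases described above differ only in which version pins each broker carries through its rolling bounce. A small sketch of the per-phase overrides, using the real inter.broker.protocol.version and log.message.format.version keys (version strings are illustrative assumptions):

def upgrade_overrides(phase, from_version="0.10.0", to_message_format=None):
    if phase == 1:
        # New binaries, but pinned to the old wire protocol and message format.
        return {
            "inter.broker.protocol.version": from_version,
            "log.message.format.version": from_version,
        }
    # Phase two: drop the protocol pin; keep the message format pinned only
    # when an explicit older format was requested.
    overrides = {}
    if to_message_format is not None:
        overrides["log.message.format.version"] = to_message_format
    return overrides

print(upgrade_overrides(1))
print(upgrade_overrides(2, to_message_format="0.9.0"))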
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "0.10.0.1",
  "to_message_format_version": null
}
Tests upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 21.232 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": true,
  "failure_mode": "hard_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
2 minutes 32.653 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "brokers",
  "check_order": true,
  "failure_mode": "hard_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
2 minutes 31.366 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "0.10.1.1",
  "to_message_format_version": null
}
Tests upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 9.036 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "0.10.1.1",
  "to_message_format_version": null
}
Tests upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 24.680 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": false,
  "failure_mode": "hard_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
3 minutes 18.194 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": false,
  "failure_mode": "hard_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic, killing the copy process as well as the broker at
    random points during the process. At the end we verify that the final
    output topic contains exactly one committed copy of each message in the
    input topic.
    
4 minutes 20.967 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "0.10.2.2",
  "to_message_format_version": "0.10.2.2"
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 11.663 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": true,
  "failure_mode": "hard_bounce",
  "use_group_metadata": false
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic and killing the copy process as well as the broker
    randomly throughout the process. In the end, we verify that the final output
    topic contains exactly one committed copy of each message in the input
    topic.
    
1 minute 29.445 seconds
Detail
Module: kafkatest.tests.core.transactions_test
Class:  TransactionsTest
Method: test_transactions
Arguments:
{
  "bounce_target": "clients",
  "check_order": true,
  "failure_mode": "hard_bounce",
  "use_group_metadata": true
}
Tests transactions by transactionally copying data from a source topic to
    a destination topic and killing the copy process as well as the broker
    randomly throughout the process. In the end, we verify that the final output
    topic contains exactly one committed copy of each message in the input
    topic.
    
1 minute 28.046 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "0.10.2.2",
  "to_message_format_version": "0.9.0.1"
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 17.217 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "0.10.2.2",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 15.614 seconds
Detail
Module: kafkatest.tests.core.zookeeper_security_upgrade_test
Class:  ZooKeeperSecurityUpgradeTest
Method: test_zk_security_upgrade
Arguments:
{
  "security_protocol": "PLAINTEXT"
}
Tests a rolling security upgrade for ZooKeeper.
    
3 minutes 12.700 seconds
Detail
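The rolling upgrade follows the standard ZooKeeper ACL migration sequence. A sketch, assuming a hypothetical restart helper and pre-provisioned JAAS credentials; the zookeeper.set.acl config and the migration tool are real.

# Sketch of the ZooKeeper security upgrade phases (hypothetical restart
# helper; assumes broker JAAS credentials for ZooKeeper already exist).
from subprocess import run

def zk_security_upgrade(brokers, zk_connect):
    # Phase 1: rolling bounce so every broker authenticates to ZooKeeper,
    # without creating secured ACLs yet.
    for broker in brokers:
        broker.restart(extra_config={"zookeeper.set.acl": "false"})
    # Phase 2: rolling bounce with secured ACL creation enabled.
    for broker in brokers:
        broker.restart(extra_config={"zookeeper.set.acl": "true"})
    # Phase 3: secure the ACLs of znodes created before phase 2.
    run(["bin/zookeeper-security-migration.sh",
         "--zookeeper.acl", "secure",
         "--zookeeper.connect", zk_connect], check=True)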
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "0.10.2.2",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 11.731 seconds
Detail
Module: kafkatest.tests.core.zookeeper_security_upgrade_test
Class:  ZooKeeperSecurityUpgradeTest
Method: test_zk_security_upgrade
Arguments:
{
  "security_protocol": "SASL_PLAINTEXT"
}
Tests a rolling security upgrade for ZooKeeper.
    
4 minutes 7.713 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "gzip"
  ],
  "from_kafka_version": "0.11.0.3",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 13.463 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "0.11.0.3",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 24.887 seconds
Detail
Module: kafkatest.tests.core.zookeeper_security_upgrade_test
Class:  ZooKeeperSecurityUpgradeTest
Method: test_zk_security_upgrade
Arguments:
{
  "security_protocol": "SASL_SSL"
}
Tests a rolling security upgrade for ZooKeeper.
    
4 minutes 28.336 seconds
Detail
Module: kafkatest.tests.core.zookeeper_security_upgrade_test
Class:  ZooKeeperSecurityUpgradeTest
Method: test_zk_security_upgrade
Arguments:
{
  "security_protocol": "SSL"
}
Tests a rolling security upgrade for ZooKeeper.
    
4 minutes 5.877 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "0.9.0.1",
  "to_message_format_version": "0.9.0.1"
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 16.096 seconds
Detail
Module: kafkatest.tests.core.zookeeper_tls_encrypt_only_test
Class:  ZookeeperTlsEncryptOnlyTest
Method: test_zk_tls_encrypt_only
Tests TLS encryption-only (ssl.clientAuth=none) connectivity to ZooKeeper.
    
2 minutes 33.521 seconds
Detail
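The ssl.clientAuth=none mode named above is a ZooKeeper server-side setting. For illustration, the relevant server properties look roughly like this, shown as a Python dict; the property names are real ZooKeeper TLS configs, the values are placeholders.

# Encryption without client certificate authentication (placeholder values).
zk_encrypt_only_props = {
    "secureClientPort": "2182",
    "serverCnxnFactory": "org.apache.zookeeper.server.NettyServerCnxnFactory",
    "ssl.keyStore.location": "/path/to/zookeeper.keystore.jks",
    "ssl.keyStore.password": "changeit",
    # The setting under test: TLS encryption, no client certs required.
    "ssl.clientAuth": "none",
}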
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "0.9.0.1",
  "to_message_format_version": "0.9.0.1"
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 16.222 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_down_resilience_test
Class:  StreamsBrokerDownResilience
Method: test_streams_should_failover_while_brokers_down
    This test validates that Streams is resilient to a broker
    being down for longer than the timeouts specified in its configs
    
1 minute 31.722 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "0.9.0.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 10.890 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_down_resilience_test
Class:  StreamsBrokerDownResilience
Method: test_streams_should_scale_in_while_brokers_down
    This test validates that Streams is resilient to a broker
    being down for longer than the timeouts specified in its configs
    
1 minute 29.994 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "0.9.0.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 16.157 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "1.0.2",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 21.352 seconds
Detail
Module: kafkatest.tests.core.zookeeper_tls_test
Class:  ZookeeperTlsTest
Method: test_zk_tls
Tests TLS connectivity to ZooKeeper.
    
8 minutes 6.548 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_failure_and_recovery
Arguments:
{
  "processing_guarantee": "exactly_once"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 32.148 seconds
Detail
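The validation idea behind the exactly-once tests can be sketched as follows: a read_committed consumer must see exactly one committed copy of each record, even though the application was killed and restarted mid-run. Topic name and address are placeholders.

# Sketch of an exactly-once output validator (assumed setup).
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder address
    "group.id": "eos-validator",
    "auto.offset.reset": "earliest",
    "isolation.level": "read_committed",     # skip aborted transactions
})
consumer.subscribe(["eos-output"])           # placeholder topic

counts = {}
while True:
    msg = consumer.poll(timeout=10.0)
    if msg is None:
        break                                # assume the topic is drained
    if msg.error():
        continue
    counts[msg.key()] = counts.get(msg.key(), 0) + 1

duplicates = {k: n for k, n in counts.items() if n > 1}
assert not duplicates, "duplicate committed records: %r" % duplicates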
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "1.0.2",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 31.461 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "1.1.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 22.242 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_failure_and_recovery
Arguments:
{
  "processing_guarantee": "exactly_once_beta"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 48.915 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_failure_and_recovery_complex
Arguments:
{
  "processing_guarantee": "exactly_once"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 34.251 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "1.1.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 24.097 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.0.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 26.156 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_rebalance_complex
Arguments:
{
  "processing_guarantee": "exactly_once"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 4.678 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "2.0.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 32.115 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_failure_and_recovery_complex
Arguments:
{
  "processing_guarantee": "exactly_once_beta"
}
    Test of Kafka Streams exactly-once semantics
    
4 minutes 38.382 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "2.1.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 28.057 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_rebalance_complex
Arguments:
{
  "processing_guarantee": "exactly_once_beta"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 5.657 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.1.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 31.145 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_rebalance_simple
Arguments:
{
  "processing_guarantee": "exactly_once"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 13.308 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.2.2",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 20.424 seconds
Detail
Module: kafkatest.tests.streams.streams_eos_test
Class:  StreamsEosTest
Method: test_rebalance_simple
Arguments:
{
  "processing_guarantee": "exactly_once_beta"
}
    Test of Kafka Streams exactly-once semantics
    
3 minutes 11.577 seconds
Detail
Module: kafkatest.tests.streams.streams_optimized_test
Class:  StreamsOptimizedTest
Method: test_upgrade_optimized_topology
    Tests upgrading a Kafka Streams application
    that is initially un-optimized and then optimized
    
2 minutes 21.631 seconds
Detail
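The knob this test flips between deployments is the real Kafka Streams "topology.optimization" config; for illustration only, the two property sets a before/after deployment would use might look like this (application id is a placeholder).

# Before and after property sets for the optimization upgrade (sketch).
unoptimized = {"application.id": "optimized-test-app", "topology.optimization": "none"}
optimized   = {"application.id": "optimized-test-app", "topology.optimization": "all"}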
Module: kafkatest.tests.client.compression_test
Class:  CompressionTest
Method: test_compressed_topic
Arguments:
{
  "compression_types": [
    "snappy",
    "gzip",
    "lz4",
    "zstd",
    "none"
  ],
  "metadata_quorum": "REMOTE_RAFT"
}
Test produce => consume => validate for compressed topics
        Setup: 1 zk, 1 kafka node, 1 topic with partitions=10, replication-factor=1

        The compression_types parameter gives a list of compression types (or no compression if
        "none"). Each producer in a VerifiableProducer group (num_producers = number of compression
        types) uses the compression type from the list corresponding to the producer's index in the group.

            - Produce messages in the background
            - Consume messages in the background
            - Stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 43.546 seconds
Detail
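The index-based assignment described above can be sketched with the confluent_kafka client; "compression.type" is the real producer setting, while the address and topic are placeholders.

# One producer per compression type, mirroring the index-based assignment.
from confluent_kafka import Producer

compression_types = ["snappy", "gzip", "lz4", "zstd", "none"]
producers = [
    Producer({
        "bootstrap.servers": "localhost:9092",   # placeholder address
        "compression.type": ctype,
    })
    for ctype in compression_types
]
for i, p in enumerate(producers):
    p.produce("test_topic", value=("msg-%d" % i).encode())
    p.flush()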
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "from_kafka_version": "2.2.2",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 29.891 seconds
Detail
Module: kafkatest.tests.client.compression_test
Class:  CompressionTest
Method: test_compressed_topic
Arguments:
{
  "compression_types": [
    "snappy",
    "gzip",
    "lz4",
    "zstd",
    "none"
  ],
  "metadata_quorum": "ZK"
}
Test produce => consume => validate for compressed topics
        Setup: 1 zk, 1 kafka node, 1 topic with partitions=10, replication-factor=1

        The compression_types parameter gives a list of compression types (or no compression if
        "none"). Each producer in a VerifiableProducer group (num_producers = number of compression
        types) uses the compression type from the list corresponding to the producer's index in the group.

            - Produce messages in the background
            - Consume messages in the background
            - Stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 41.844 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.3.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 47.878 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": false,
  "security_protocol": "SASL_PLAINTEXT"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 18.458 seconds
Detail
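At its core, MirrorMaker is a consume-from-source, produce-to-target loop; a naive sketch follows (the real tool adds batching, offset tracking, and failure handling; cluster addresses and the topic are placeholders).

# Naive mirror loop (sketch only, not the MirrorMaker implementation).
from confluent_kafka import Consumer, Producer

source = Consumer({
    "bootstrap.servers": "source-kafka:9092",   # placeholder address
    "group.id": "mirror",
    "auto.offset.reset": "earliest",
})
target = Producer({"bootstrap.servers": "target-kafka:9092"})
source.subscribe(["topic-to-mirror"])           # placeholder topic

while True:
    msg = source.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    target.produce(msg.topic(), msg.value(), msg.key())
    target.poll(0)  # serve delivery callbacks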
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": false,
  "security_protocol": "SASL_SSL"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 34.631 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": true,
  "security_protocol": "SASL_PLAINTEXT"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 9.932 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_simple_end_to_end
Arguments:
{
  "security_protocol": "SASL_PLAINTEXT"
}
        Test end-to-end behavior under non-failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce a small number of messages to the source cluster.
        - Consume messages from the target.
        - Verify that the number of consumed messages matches the number produced.
        
2 minutes 1.607 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": true,
  "security_protocol": "SASL_SSL"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 27.664 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_simple_end_to_end
Arguments:
{
  "security_protocol": "SASL_SSL"
}
        Test end-to-end behavior under non-failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce a small number of messages to the source cluster.
        - Consume messages from the target.
        - Verify that the number of consumed messages matches the number produced.
        
2 minutes 9.793 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "from_kafka_version": "2.3.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 47.869 seconds
Detail
Module: kafkatest.tests.core.produce_bench_test
Class:  ProduceBenchTest
Method: test_produce_bench
Arguments:
{
  "metadata_quorum": "ZK"
}
2 minutes 58.934 seconds
Detail
Module: kafkatest.tests.core.produce_bench_test
Class:  ProduceBenchTest
Method: test_produce_bench
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
3 minutes 13.703 seconds
Detail
Module: kafkatest.tests.core.produce_bench_test
Class:  ProduceBenchTest
Method: test_produce_bench_transactions
3 minutes 0.898 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.4.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 49.751 seconds
Detail
Module: kafkatest.tests.core.reassign_partitions_test
Class:  ReassignPartitionsTest
Method: test_reassign_partitions
Arguments:
{
  "bounce_brokers": false,
  "reassign_from_offset_zero": true
}
Reassign partitions tests.
        Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3,
        and min.insync.replicas=3

            - Produce messages in the background
            - Consume messages in the background
            - Reassign partitions
            - If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress
            - When done reassigning partitions and bouncing brokers, stop producing, and finish consuming
            - Validate that every acked message was consumed
            
2 minutes 41.808 seconds
Detail
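The reassignment step uses the kafka-reassign-partitions.sh tool and its JSON plan format; a sketch of how such a plan might be built and executed (the tool, flags, and JSON schema are real; the broker list, topic, and partition count are placeholders).

# Build a reassignment plan and execute it (sketch).
import json
import subprocess

plan = {
    "version": 1,
    "partitions": [
        {"topic": "test_topic", "partition": p, "replicas": [1, 2, 3]}
        for p in range(20)
    ],
}
with open("/tmp/reassign.json", "w") as f:
    json.dump(plan, f)

subprocess.run([
    "bin/kafka-reassign-partitions.sh",
    "--bootstrap-server", "localhost:9092",
    "--reassignment-json-file", "/tmp/reassign.json",
    "--execute",
], check=True)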
Module: kafkatest.tests.core.reassign_partitions_test
Class:  ReassignPartitionsTest
Method: test_reassign_partitions
Arguments:
{
  "bounce_brokers": false,
  "reassign_from_offset_zero": false
}
Reassign partitions tests.
        Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3,
        and min.insync.replicas=3

            - Produce messages in the background
            - Consume messages in the background
            - Reassign partitions
            - If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress
            - When done reassigning partitions and bouncing brokers, stop producing, and finish consuming
            - Validate that every acked message was consumed
            
3 minutes 19.604 seconds
Detail
Module: kafkatest.tests.core.reassign_partitions_test
Class:  ReassignPartitionsTest
Method: test_reassign_partitions
Arguments:
{
  "bounce_brokers": true,
  "reassign_from_offset_zero": false
}
Reassign partitions tests.
        Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3,
        and min.insync.replicas=3

            - Produce messages in the background
            - Consume messages in the background
            - Reassign partitions
            - If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress
            - When done reassigning partitions and bouncing brokers, stop producing, and finish consuming
            - Validate that every acked message was consumed
            
3 minutes 42.462 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "from_kafka_version": "2.4.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 41.920 seconds
Detail
Module: kafkatest.tests.core.reassign_partitions_test
Class:  ReassignPartitionsTest
Method: test_reassign_partitions
Arguments:
{
  "bounce_brokers": true,
  "reassign_from_offset_zero": true
}
Reassign partitions tests.
        Setup: 1 zk, 4 kafka nodes, 1 topic with partitions=20, replication-factor=3,
        and min.insync.replicas=3

            - Produce messages in the background
            - Consume messages in the background
            - Reassign partitions
            - If bounce_brokers is True, also bounce a few brokers while partition re-assignment is in progress
            - When done reassigning partitions and bouncing brokers, stop producing, and finish consuming
            - Validate that every acked message was consumed
            
2 minutes 58.339 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_one
Arguments:
{
  "client_protocol": "SSL"
}
        Start with a PLAINTEXT cluster and open a secured port via a rolling upgrade, ensuring we can produce
        and consume over PLAINTEXT throughout. Finally, check that we can produce and consume via the new secured port.
        
4 minutes 8.665 seconds
Detail
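Phase one amounts to one rolling bounce per broker that adds a second, secured listener while PLAINTEXT stays up; a sketch with a hypothetical restart helper (the "listeners" config is the real broker setting).

# Open a secured port alongside PLAINTEXT via a rolling bounce (sketch).
def open_secured_port(brokers, client_protocol="SSL"):
    for broker in brokers:
        broker.restart(extra_config={
            "listeners": "PLAINTEXT://:9092," + client_protocol + "://:9093",
        })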
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_PLAINTEXT",
  "client_protocol": "SASL_PLAINTEXT"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
4 minutes 41.321 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.5.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 42.154 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_SSL",
  "client_protocol": "SASL_PLAINTEXT"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
5 minutes 0.972 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SSL",
  "client_protocol": "SASL_PLAINTEXT"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
4 minutes 49.688 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "from_kafka_version": "2.5.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 42.793 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_PLAINTEXT",
  "client_protocol": "SASL_SSL"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
5 minutes 12.175 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_SSL",
  "client_protocol": "SASL_SSL"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
5 minutes 4.715 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "2.6.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, it means that we will upgrade to default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 47.346 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SSL",
  "client_protocol": "SASL_SSL"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
4 minutes 53.405 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_PLAINTEXT",
  "client_protocol": "SSL"
}
        Start with a PLAINTEXT cluster that has a second, secured port open (i.e. the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade so inter-broker communication uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
5 minutes 20.524 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.6.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, the cluster is upgraded to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to the current version with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 36.170 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SASL_SSL",
  "client_protocol": "SSL"
}
        Start with a PLAINTEXT cluster with a second secured port open (i.e., the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade the cluster so that inter-broker traffic uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
5 minutes 16.355 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_phase_two
Arguments:
{
  "broker_protocol": "SSL",
  "client_protocol": "SSL"
}
        Start with a PLAINTEXT cluster with a second secured port open (i.e., the result of phase one).
        A third secure port is also open if the inter-broker and client protocols differ.
        Start a producer and consumer via the secured client port.
        Incrementally upgrade the cluster so that inter-broker traffic uses the secure broker protocol.
        Incrementally upgrade again to add ACLs and disable the PLAINTEXT port.
        Ensure the producer and consumer ran throughout.
        
4 minutes 35.852 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "0.10.0.1"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
2 minutes 37.386 seconds
Detail
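The cooperative-rebalance upgrade above is driven by the Streams "upgrade.from" client config over two rolling bounces; the sketch below illustrates this under assumed values (the application id is hypothetical, not from the test source).

    # First rolling bounce: new jars, but stay on the eager protocol.
    first_bounce_props = {
        "application.id": "cooperative-upgrade-app",   # hypothetical
        "upgrade.from": "0.10.0.1",                    # matches upgrade_from_version
    }

    # Second rolling bounce: drop "upgrade.from" so the instances switch
    # from eager to cooperative rebalancing.
    second_bounce_props = {
        "application.id": "cooperative-upgrade-app",
    }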
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "2.6.1",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, the cluster is upgraded to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to the current version with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 41.816 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "0.10.1.1"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
2 minutes 31.786 seconds
Detail
Module: kafkatest.tests.core.security_rolling_upgrade_test
Class:  TestSecurityRollingUpgrade
Method: test_rolling_upgrade_sasl_mechanism_phase_two
Arguments:
{
  "new_sasl_mechanism": "PLAIN"
}
        Start with a SASL cluster with GSSAPI for inter-broker traffic and a second mechanism for clients (i.e., the result of phase one).
        Start Producer and Consumer using the second mechanism
        Incrementally upgrade to set inter-broker to the second mechanism and disable GSSAPI
        Incrementally upgrade again to add ACLs
        Ensure the producer and consumer run throughout
        
5 minutes 6.978 seconds
Detail
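A minimal sketch of the mechanism rotation this phase-two test performs, assuming PLAIN as the second mechanism; the property names are real broker settings, while the step grouping and the authorizer choice are illustrative assumptions.

    # First roll: both mechanisms enabled, inter-broker traffic moved off GSSAPI.
    first_roll = {
        "sasl.enabled.mechanisms": "GSSAPI,PLAIN",
        "sasl.mechanism.inter.broker.protocol": "PLAIN",
    }

    # Second roll: disable GSSAPI entirely and add ACLs.
    second_roll = {
        "sasl.enabled.mechanisms": "PLAIN",
        "sasl.mechanism.inter.broker.protocol": "PLAIN",
        "authorizer.class.name": "kafka.security.authorizer.AclAuthorizer",  # assumed authorizer
    }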
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "0.10.2.2"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
2 minutes 52.366 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "from_kafka_version": "2.7.0",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, the cluster is upgraded to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to the current version with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 14.947 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "0.11.0.3"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 1.418 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "1.0.2"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 0.489 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "1.1.1"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 0.647 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "2.7.0",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, the cluster is upgraded to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to the current version with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 14.139 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.0.1"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 4.617 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.1.1"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 6.287 seconds
Detail
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.2.2"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 13.042 seconds
Detail
Module: kafkatest.tests.streams.streams_relational_smoke_test
Class:  StreamsRelationalSmokeTest
Method: test_streams
Arguments:
{
  "crash": false,
  "processing_guarantee": "exactly_once"
}
    Simple test of Kafka Streams.
    
2 minutes 52.076 seconds
Detail
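The processing_guarantee argument above maps directly onto the Kafka Streams "processing.guarantee" client config; a minimal sketch with hypothetical application and bootstrap values:

    streams_props = {
        "application.id": "relational-smoke-app",   # hypothetical
        "bootstrap.servers": "broker:9092",         # hypothetical
        # one of "at_least_once", "exactly_once", or "exactly_once_beta"
        "processing.guarantee": "exactly_once",
    }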
Module: kafkatest.tests.streams.streams_cooperative_rebalance_upgrade_test
Class:  StreamsCooperativeRebalanceUpgradeTest
Method: test_upgrade_to_cooperative_rebalance
Arguments:
{
  "upgrade_from_version": "2.3.1"
}
    Test of a rolling upgrade from eager rebalance to
    cooperative rebalance
    
3 minutes 2.305 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "2.7.0",
  "to_message_format_version": null
}
Test upgrade of Kafka broker cluster from various versions to the current version

        from_kafka_version is a Kafka version to upgrade from

        If to_message_format_version is None, the cluster is upgraded to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start 3 node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to the current version with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 18.471 seconds
Detail
Module: kafkatest.tests.streams.streams_application_upgrade_test
Class:  StreamsUpgradeTest
Method: test_app_upgrade
Arguments:
{
  "bounce_type": "full",
  "from_version": "2.2.2",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with <old_version>, and upgrades them one by one to <new_version>
        
1 minute 38.534 seconds
Detail
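The bounce_type argument distinguishes two restart strategies; the sketch below illustrates both under an assumed instance API (the node objects and their stop/start methods are hypothetical, not from the test source).

    def full_bounce(instances, new_version):
        # "full": stop every instance, then restart all of them on the new version.
        for node in instances:
            node.stop()
        for node in instances:
            node.start(version=new_version)

    def rolling_bounce(instances, new_version):
        # rolling: upgrade one instance at a time, keeping the app available.
        for node in instances:
            node.stop()
            node.start(version=new_version)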
Module: kafkatest.tests.streams.streams_relational_smoke_test
Class:  StreamsRelationalSmokeTest
Method: test_streams
Arguments:
{
  "crash": false,
  "processing_guarantee": "exactly_once_beta"
}
    Simple test of Kafka Streams.
    
2 minutes 52.743 seconds
Detail
Module: kafkatest.tests.streams.streams_relational_smoke_test
Class:  StreamsRelationalSmokeTest
Method: test_streams
Arguments:
{
  "crash": true,
  "processing_guarantee": "exactly_once"
}
    Simple test of Kafka Streams.
    
2 minutes 55.477 seconds
Detail
Module: kafkatest.tests.streams.streams_relational_smoke_test
Class:  StreamsRelationalSmokeTest
Method: test_streams
Arguments:
{
  "crash": true,
  "processing_guarantee": "exactly_once_beta"
}
    Simple test of Kafka Streams.
    
2 minutes 53.414 seconds
Detail
Module: kafkatest.tests.streams.streams_application_upgrade_test
Class:  StreamsUpgradeTest
Method: test_app_upgrade
Arguments:
{
  "bounce_type": "full",
  "from_version": "2.3.1",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with <old_version>, and upgrades them one by one to <new_version>
        
1 minute 36.726 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": false,
  "metadata_quorum": "REMOTE_RAFT",
  "processing_guarantee": "at_least_once"
}
    Simple test of Kafka Streams.
    
2 minutes 34.954 seconds
Detail
Module: kafkatest.tests.streams.streams_application_upgrade_test
Class:  StreamsUpgradeTest
Method: test_app_upgrade
Arguments:
{
  "bounce_type": "full",
  "from_version": "2.4.1",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with <old_version>, and upgrades them one by one to <new_version>
        
1 minute 41.080 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": true,
  "metadata_quorum": "REMOTE_RAFT",
  "processing_guarantee": "at_least_once"
}
    Simple test of Kafka Streams.
    
2 minutes 32.759 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": false,
  "metadata_quorum": "ZK",
  "processing_guarantee": "at_least_once"
}
    Simple test of Kafka Streams.
    
2 minutes 40.890 seconds
Detail
Module: kafkatest.tests.streams.streams_application_upgrade_test
Class:  StreamsUpgradeTest
Method: test_app_upgrade
Arguments:
{
  "bounce_type": "full",
  "from_version": "2.5.1",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with <old_version>, and upgrades them one by one to <new_version>
        
1 minute 42.225 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": true,
  "metadata_quorum": "ZK",
  "processing_guarantee": "at_least_once"
}
    Simple test of Kafka Streams.
    
2 minutes 40.020 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": false,
  "processing_guarantee": "exactly_once"
}
    Simple test of Kafka Streams.
    
2 minutes 44.359 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": true,
  "processing_guarantee": "exactly_once"
}
    Simple test of Kafka Streams.
    
2 minutes 44.073 seconds
Detail
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_metadata_upgrade
Arguments:
{
  "from_version": "0.10.0.1",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with version <from_version> and upgrades them one by one to <to_version>
        
2 minutes 1.705 seconds
Detail
Module: kafkatest.tests.streams.streams_static_membership_test
Class:  StreamsStaticMembershipTest
Method: test_rolling_bounces_will_not_trigger_rebalance_under_static_membership
    Tests using static membership when broker points to minimum supported
    version (2.3) or higher.
    
1 minute 53.214 seconds
Detail
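Static membership, exercised by the test above, hinges on the consumer-side "group.instance.id" setting (a Streams app supplies it through its consumer configs); a minimal sketch with hypothetical values:

    consumer_props = {
        "group.id": "static-membership-group",   # hypothetical
        "group.instance.id": "consumer-1",       # stable id; pins the member across restarts
        # the bounce must complete within the session timeout for no rebalance to fire
        "session.timeout.ms": "60000",           # illustrative value
    }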
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": false,
  "processing_guarantee": "exactly_once_beta"
}
    Simple test of Kafka Streams.
    
2 minutes 41.680 seconds
Detail
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_metadata_upgrade
Arguments:
{
  "from_version": "0.10.1.1",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with version <from_version> and upgrades them one by one to <to_version>
        
1 minute 53.072 seconds
Detail
Module: kafkatest.tests.streams.streams_smoke_test
Class:  StreamsSmokeTest
Method: test_streams
Arguments:
{
  "crash": true,
  "processing_guarantee": "exactly_once_beta"
}
    Simple test of Kafka Streams.
    
2 minutes 46.405 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "0.10.0.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
53.772 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "num_producers": 3,
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 14.410 seconds
{
  "mb_per_sec": 62.089999999999996,
  "records_per_sec": 651037.087593
}
Detail
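For reference, a standalone run roughly equivalent to this benchmark can be driven through the stock producer perf tool (ProducerPerformance under the hood); the record count and size below are illustrative assumptions chosen to approximate the ~128MB payload.

    import subprocess

    subprocess.run([
        "bin/kafka-producer-perf-test.sh",
        "--topic", "topic-replication-factor-three",
        "--num-records", "1000000",     # illustrative; ~128MB at 128-byte records
        "--record-size", "128",
        "--throughput", "-1",           # uncapped
        "--producer-props", "bootstrap.servers=broker:9092", "acks=1",
    ], check=True)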
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "0.10.1.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
53.709 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "0.10.2.2"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
53.058 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "0.11.0.3"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
58.059 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "1.0.2"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
59.112 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "1.1.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
56.628 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.0.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 1.711 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.1.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 2.252 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.2.2"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 4.124 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.3.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 2.217 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.4.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 2.748 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.5.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 3.716 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.6.1"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 4.307 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "2.7.0"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 8.262 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "dev",
  "metadata_quorum": "REMOTE_RAFT"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 9.238 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": false,
  "enable_autocommit": false,
  "metadata_quorum": "REMOTE_RAFT"
}
47.110 seconds
Detail
Module: kafkatest.tests.client.client_compatibility_features_test
Class:  ClientCompatibilityFeaturesTest
Method: run_compatibility_test
Arguments:
{
  "broker_version": "dev",
  "metadata_quorum": "ZK"
}
    Tests clients for the presence or absence of specific features when communicating with brokers
    running various versions. Relies on ClientCompatibilityTest.java for much of the functionality.
    
1 minute 2.493 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": false,
  "enable_autocommit": true,
  "metadata_quorum": "REMOTE_RAFT"
}
44.778 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": false,
  "enable_autocommit": false,
  "metadata_quorum": "ZK"
}
1 minute 10.177 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": false,
  "metadata_quorum": "REMOTE_RAFT"
}
41.104 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": false,
  "enable_autocommit": true,
  "metadata_quorum": "ZK"
}
1 minute 8.804 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": false,
  "metadata_quorum": "ZK"
}
53.177 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": true,
  "metadata_quorum": "REMOTE_RAFT"
}
43.661 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": true,
  "metadata_quorum": "ZK"
}
52.672 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_bounce
Arguments:
{
  "bounce_mode": "all",
  "clean_shutdown": true,
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify correct consumer behavior when the consumers in the group are consecutively restarted.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each consumer, waiting for each one to rejoin the group before
          restarting the rest.
        - Verify delivery semantics according to the failure type.
        
1 minute 23.425 seconds
Detail
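With clean_shutdown=true the delivery guarantee verified above is at-least-once; a minimal sketch of that check (function and variable names are illustrative, not from the test source):

    def verify_at_least_once(acked_values, consumed_values):
        # every record acked to the producer must appear among the consumed
        # records; duplicates on the consumer side are permitted
        missing = set(acked_values) - set(consumed_values)
        assert not missing, "%d acked records were never consumed" % len(missing)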
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_bounce
Arguments:
{
  "bounce_mode": "all",
  "clean_shutdown": true,
  "metadata_quorum": "ZK"
}
        Verify correct consumer behavior when the consumers in the group are consecutively restarted.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each consumer, waiting for each one to rejoin the group before
          restarting the rest.
        - Verify delivery semantics according to the failure type.
        
1 minute 31.386 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_rolling_bounce
Arguments:
{
  "metadata_quorum": "ZK"
}
        Verify correct consumer behavior when the brokers are consecutively restarted.

        Setup: single Kafka cluster with one producer writing messages to a single topic with one
        partition, and a set of consumers in the same group reading from the same topic.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each broker consecutively, waiting for the group to stabilize between
          each broker restart.
        - Verify delivery semantics according to the failure type and that the broker bounces
          did not cause unexpected group rebalances.
        
2 minutes 20.892 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_broker_rolling_bounce
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify correct consumer behavior when the brokers are consecutively restarted.

        Setup: single Kafka cluster with one producer writing messages to a single topic with one
        partition, and a set of consumers in the same group reading from the same topic.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each broker consecutively, waiting for the group to stabilize between
          each broker restart.
        - Verify delivery semantics according to the failure type and that the broker bounces
          did not cause unexpected group rebalances.
        
2 minutes 47.456 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": false,
  "metadata_quorum": "REMOTE_RAFT"
}
46.472 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_bounce
Arguments:
{
  "bounce_mode": "rolling",
  "clean_shutdown": true,
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify correct consumer behavior when the consumers in the group are consecutively restarted.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each consumer, waiting for each one to rejoin the group before
          restarting the rest.
        - Verify delivery semantics according to the failure type.
        
1 minute 44.196 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_bounce
Arguments:
{
  "bounce_mode": "rolling",
  "clean_shutdown": true,
  "metadata_quorum": "ZK"
}
        Verify correct consumer behavior when the consumers in the group are consecutively restarted.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each consumer, waiting for each one to rejoin the group before
          restarting the rest.
        - Verify delivery semantics according to the failure type.
        
1 minute 45.437 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": false,
  "metadata_quorum": "ZK"
}
57.249 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": true,
  "metadata_quorum": "REMOTE_RAFT"
}
46.449 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_consumer_failure
Arguments:
{
  "clean_shutdown": true,
  "enable_autocommit": true,
  "metadata_quorum": "ZK"
}
59.417 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_group_consumption
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Verifies correct group rebalance behavior as consumers are started and stopped.
        In particular, this test verifies that the partition is readable after every
        expected rebalance.

        Setup: single Kafka cluster with a group of consumers reading from one topic
        with one partition while the verifiable producer writes to it.

        - Start the consumers one by one, verifying consumption after each rebalance
        - Shut down the consumers one by one, verifying consumption after each rebalance
        
50.878 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_group_consumption
Arguments:
{
  "metadata_quorum": "ZK"
}
        Verifies correct group rebalance behavior as consumers are started and stopped.
        In particular, this test verifies that the partition is readable after every
        expected rebalance.

        Setup: single Kafka cluster with a group of consumers reading from one topic
        with one partition while the verifiable producer writes to it.

        - Start the consumers one by one, verifying consumption after each rebalance
        - Shut down the consumers one by one, verifying consumption after each rebalance
        
1 minute 6.810 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "all",
  "clean_shutdown": true,
  "metadata_quorum": "REMOTE_RAFT",
  "num_bounces": 5,
  "static_membership": false
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 17.518 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "all",
  "clean_shutdown": true,
  "metadata_quorum": "ZK",
  "num_bounces": 5,
  "static_membership": false
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 29.895 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "rolling",
  "clean_shutdown": true,
  "metadata_quorum": "REMOTE_RAFT",
  "num_bounces": 5,
  "static_membership": false
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 22.258 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "rolling",
  "clean_shutdown": true,
  "metadata_quorum": "ZK",
  "num_bounces": 5,
  "static_membership": false
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 27.722 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "all",
  "clean_shutdown": true,
  "metadata_quorum": "REMOTE_RAFT",
  "num_bounces": 5,
  "static_membership": true
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 2.775 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "all",
  "clean_shutdown": true,
  "metadata_quorum": "ZK",
  "num_bounces": 5,
  "static_membership": true
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 14.879 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "rolling",
  "clean_shutdown": true,
  "metadata_quorum": "REMOTE_RAFT",
  "num_bounces": 5,
  "static_membership": true
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 13.930 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_persisted_after_rejoin
Arguments:
{
  "bounce_mode": "all",
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify that the updated member.id (updated_member_id) caused by a static member rejoin is persisted. If not,
        after the brokers' rolling bounce, the migrated group coordinator would load the stale persisted member.id and
        fence a subsequent static member rejoin with updated_member_id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up a static consumer and wait until it's up.
        - Restart the consumer and wait until it is up; its member.id is supposed to be updated and persisted.
        - Rolling bounce all the brokers and verify that the static consumer can still join the group and consume messages.
        
1 minute 14.269 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_bounce
Arguments:
{
  "bounce_mode": "rolling",
  "clean_shutdown": true,
  "metadata_quorum": "ZK",
  "num_bounces": 5,
  "static_membership": true
}
        Verify correct static consumer behavior when the consumers in the group are restarted. To make
        sure the behavior of static members differs from that of dynamic ones, this suite exercises both
        static and dynamic membership.

        Setup: single Kafka cluster with one producer and a set of consumers in one group.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers as static/dynamic members and wait until they've joined the group.
        - In a loop, restart each consumer except the first member (note: may not be the leader), and expect no rebalance triggered
          during this process if the group is in static membership.
        
1 minute 23.178 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_persisted_after_rejoin
Arguments:
{
  "bounce_mode": "all",
  "metadata_quorum": "ZK"
}
        Verify that the updated member.id (updated_member_id) caused by a static member rejoin is persisted. If not,
        after the brokers' rolling bounce, the migrated group coordinator would load the stale persisted member.id and
        fence a subsequent static member rejoin with updated_member_id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up a static consumer and wait until it's up.
        - Restart the consumer and wait until it is up; its member.id is supposed to be updated and persisted.
        - Rolling bounce all the brokers and verify that the static consumer can still join the group and consume messages.
        
1 minute 11.858 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_persisted_after_rejoin
Arguments:
{
  "bounce_mode": "rolling",
  "metadata_quorum": "REMOTE_RAFT"
}
        Verify that the updated member.id (updated_member_id) caused by a static member rejoin is persisted. If not,
        after the brokers' rolling bounce, the migrated group coordinator would load the stale persisted member.id and
        fence a subsequent static member rejoin with updated_member_id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up a static consumer and wait until it's up.
        - Restart the consumer and wait until it is up; its member.id is supposed to be updated and persisted.
        - Rolling bounce all the brokers and verify that the static consumer can still join the group and consume messages.
        
1 minute 26.756 seconds
Detail
Module: kafkatest.tests.client.consumer_test
Class:  OffsetValidationTest
Method: test_static_consumer_persisted_after_rejoin
Arguments:
{
  "bounce_mode": "rolling",
  "metadata_quorum": "ZK"
}
        Verify that the updated member.id (updated_member_id) caused by a static member rejoin is persisted. If not,
        after the brokers' rolling bounce, the migrated group coordinator would load the stale persisted member.id and
        fence a subsequent static member rejoin with updated_member_id.

        - Start a producer which continues producing new messages throughout the test.
        - Start up a static consumer and wait until it's up.
        - Restart the consumer and wait until it is up; its member.id is supposed to be updated and persisted.
        - Rolling bounce all the brokers and verify that the static consumer can still join the group and consume messages.
        
1 minute 18.602 seconds
Detail
Module: kafkatest.tests.client.truncation_test
Class:  TruncationTest
Method: test_offset_truncate
        Verify correct consumer behavior when the brokers are consecutively restarted.

        Setup: single Kafka cluster with one producer writing messages to a single topic with one
        partition, and a set of consumers in the same group reading from the same topic.

        - Start a producer which continues producing new messages throughout the test.
        - Start up the consumers and wait until they've joined the group.
        - In a loop, restart each broker consecutively, waiting for the group to stabilize between
          each broker restart.
        - Verify delivery semantics according to the failure type and that the broker bounces
          did not cause unexpected group rebalances.
        
1 minute 47.367 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "version": "1.1.1"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 13.711 seconds
Detail
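The downgrade above stays safe because the IBP never advances during the upgrade roll; a minimal sketch of the overrides, assuming version="1.1.1" (the property name is real, the values are illustrative):

    # Upgrade roll: current broker binary, IBP pinned to the old version.
    upgrade_roll_overrides = {
        "inter.broker.protocol.version": "1.1",
    }
    # Downgrade roll: reinstall the 1.1.1 binary with no override needed;
    # because the IBP stayed at 1.1, the older brokers can still read
    # everything the newer binary wrote.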
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "security_protocol": "SASL_SSL",
  "version": "1.1.1"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 43.004 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "version": "2.0.1"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 24.136 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "security_protocol": "SASL_SSL",
  "version": "2.0.1"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 49.902 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "version": "2.1.1"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 20.803 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "lz4"
  ],
  "security_protocol": "SASL_SSL",
  "version": "2.1.1"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 44.310 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "version": "2.2.2"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 18.832 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "security_protocol": "SASL_SSL",
  "version": "2.2.2"
}
Test upgrade and downgrade of Kafka cluster from old versions to the current version

        `version` is the Kafka version to upgrade from and downgrade back to

        Downgrades are supported to any version which is at or above the current 
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3, 
        but they leave the IBP set to 1.1, then downgrading to any version at 1.1 or higher is 
        supported.

        This test case verifies that producers and consumers continue working during
        the course of an upgrade and downgrade.

        - Start 3 node broker cluster on version 'kafka_version'
        - Start producer and consumer in the background
        - Roll the cluster to upgrade to the current version with IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
2 minutes 51.266 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "version": "2.3.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 8.870 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "static_membership": true,
  "version": "2.4.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 13.818 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "security_protocol": "SASL_SSL",
  "version": "2.3.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 41.185 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "security_protocol": "SASL_SSL",
  "static_membership": true,
  "version": "2.4.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 51.343 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "static_membership": false,
  "version": "2.5.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 22.436 seconds
Detail
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "static_membership": true,
  "version": "2.5.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 11.602 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": false,
  "security_protocol": "PLAINTEXT"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 9.869 seconds
Detail
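The bounce these rows drive amounts to signalling the MirrorMaker process and restarting it. A minimal sketch, assuming the legacy kafka-mirror-maker.sh CLI is on the PATH and that the two .properties files point at the source and target clusters (all paths and the topic pattern are placeholders):

    import signal
    import subprocess
    import time

    def start_mirror_maker():
        # Legacy MirrorMaker: consume from the source cluster, produce to the target.
        return subprocess.Popen([
            "kafka-mirror-maker.sh",
            "--consumer.config", "source-consumer.properties",
            "--producer.config", "target-producer.properties",
            "--whitelist", "topic-.*",
        ])

    clean_shutdown = False          # mirrors the test's "clean_shutdown" argument
    mm = start_mirror_maker()
    time.sleep(30)                  # let some traffic flow through the mirror
    mm.send_signal(signal.SIGTERM if clean_shutdown else signal.SIGKILL)
    mm.wait()
    mm = start_mirror_maker()       # restart; acked source messages must still arrive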
Module: kafkatest.tests.core.downgrade_test
Class:  TestDowngrade
Method: test_upgrade_and_downgrade
Arguments:
{
  "compression_types": [
    "zstd"
  ],
  "security_protocol": "SASL_SSL",
  "version": "2.5.1"
}
Test upgrade and downgrade of a Kafka cluster from old versions to the current version.

        `version` is the Kafka version to upgrade from and downgrade back to.

        Downgrades are supported to any version at or above the current
        `inter.broker.protocol.version` (IBP). For example, if a user upgrades from 1.1 to 2.3
        but leaves the IBP set to 1.1, then downgrading to any version at 1.1 or higher is
        supported.

        This test case verifies that producers and consumers continue working throughout
        the upgrade and downgrade.

        - Start a 3-node broker cluster on version 'kafka_version'
        - Start a producer and a consumer in the background
        - Roll the cluster to upgrade to the current version, with the IBP set to 'kafka_version'
        - Roll the cluster to downgrade back to 'kafka_version'
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 57.152 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": true,
  "security_protocol": "PLAINTEXT"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
1 minute 56.509 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": false,
  "security_protocol": "SSL"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 22.484 seconds
Detail
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_simple_end_to_end
Arguments:
{
  "security_protocol": "PLAINTEXT"
}
        Test end-to-end behavior under non-failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce a small number of messages to the source cluster.
        - Consume messages from the target cluster.
        - Verify that the number of consumed messages matches the number produced.
        
1 minute 46.562 seconds
Detail
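The count-match verification in the non-failure case is straightforward; a minimal sketch using the third-party kafka-python client (bootstrap addresses, topic name, and message count are placeholders):

    from kafka import KafkaConsumer, KafkaProducer

    N = 100
    producer = KafkaProducer(bootstrap_servers="source:9092")
    for i in range(N):
        producer.send("mirror-topic", str(i).encode())
    producer.flush()                                     # all N are acked by the source

    consumer = KafkaConsumer("mirror-topic",
                             bootstrap_servers="target:9092",
                             auto_offset_reset="earliest",
                             consumer_timeout_ms=60000)  # allow time for mirroring
    consumed = sum(1 for _ in consumer)
    assert consumed == N, f"mirrored {consumed} of {N} messages"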
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_bounce
Arguments:
{
  "clean_shutdown": true,
  "security_protocol": "SSL"
}
        Test end-to-end behavior under failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce to the source cluster, and consume from the target cluster in the background.
        - Bounce the MirrorMaker process.
        - Verify that every message acknowledged by the source producer is consumed by the target consumer.
        
2 minutes 16.406 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_bounce",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 46.207 seconds
Detail
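The durability claim in these rows hinges on what "acked" means under this setup: with replication-factor=3 and min.insync.replicas=2, a write is acknowledged only once at least two in-sync replicas have it, so losing any single broker (leader or controller) must not lose acked data. A minimal producer-side sketch with kafka-python (addresses and topic are placeholders):

    from kafka import KafkaProducer
    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="broker1:9092")
    admin.create_topics([NewTopic("test-topic", num_partitions=3, replication_factor=3,
                                  topic_configs={"min.insync.replicas": "2"})])

    producer = KafkaProducer(bootstrap_servers="broker1:9092",
                             acks="all",   # wait for the full in-sync replica set
                             retries=5)    # ride out leader moves during a bounce
    future = producer.send("test-topic", b"payload")
    metadata = future.get(timeout=30)      # raises if the write was never acked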
Module: kafkatest.tests.core.mirror_maker_test
Class:  TestMirrorMakerService
Method: test_simple_end_to_end
Arguments:
{
  "security_protocol": "SSL"
}
        Test end-to-end behavior under non-failure conditions.

        Setup: two single-node Kafka clusters, each connected to its own single-node ZooKeeper cluster.
        One is the source and the other is the target. A single-node MirrorMaker mirrors from source to target.

        - Start MirrorMaker.
        - Produce a small number of messages to the source cluster.
        - Consume messages from the target cluster.
        - Verify that the number of consumed messages matches the number produced.
        
2 minutes 3.228 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "enable_idempotence": true,
  "failure_mode": "clean_bounce",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 52.104 seconds
Detail
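The "enable_idempotence": true rows additionally assert that retries forced by the bounce cannot introduce duplicates or reordering within a partition. A sketch of the client side using the confluent-kafka (librdkafka) package; the broker address and topic are placeholders:

    from confluent_kafka import Producer

    producer = Producer({
        "bootstrap.servers": "broker1:9092",
        # Idempotence implies acks=all plus per-partition sequence numbers, so a
        # retried produce is deduplicated by the broker instead of written twice.
        "enable.idempotence": True,
    })
    producer.produce("test-topic", b"payload",
                     on_delivery=lambda err, msg: err and print("not acked:", err))
    producer.flush()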
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_bounce",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 32.498 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 52.949 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 16.153 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 40.892 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 4.093 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 22.295 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 10.853 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_shutdown",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 0.395 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_shutdown",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 25.839 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 1.829 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "enable_idempotence": true,
  "failure_mode": "clean_shutdown",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 3.806 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 15.876 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
57.696 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 4.708 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
55.755 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 23.143 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 30.720 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 4.545 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
56.200 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "clean_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 3.184 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_bounce",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
2 minutes 59.230 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "enable_idempotence": true,
  "failure_mode": "hard_bounce",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 18.774 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_bounce",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 40.045 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 44.746 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 15.673 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "client_sasl_mechanism": "PLAIN",
  "failure_mode": "hard_bounce",
  "interbroker_sasl_mechanism": "GSSAPI",
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 52.093 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "client_sasl_mechanism": "PLAIN",
  "failure_mode": "hard_bounce",
  "interbroker_sasl_mechanism": "GSSAPI",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 35.271 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "client_sasl_mechanism": "PLAIN",
  "failure_mode": "hard_bounce",
  "interbroker_sasl_mechanism": "PLAIN",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 27.921 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "client_sasl_mechanism": "PLAIN",
  "failure_mode": "hard_bounce",
  "interbroker_sasl_mechanism": "PLAIN",
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 47.225 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "client_sasl_mechanism": "SCRAM-SHA-256",
  "failure_mode": "hard_bounce",
  "interbroker_sasl_mechanism": "SCRAM-SHA-512",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 44.137 seconds
Detail
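For the SASL rows above, the security_protocol and *_sasl_mechanism arguments map onto standard Kafka client and broker settings; an illustrative property set for the SCRAM case (credentials and paths are placeholders):

    # Client-side properties implied by security_protocol=SASL_SSL and
    # client_sasl_mechanism=SCRAM-SHA-256 (Java-client property names):
    client_props = {
        "security.protocol": "SASL_SSL",
        "sasl.mechanism": "SCRAM-SHA-256",
        "sasl.jaas.config": 'org.apache.kafka.common.security.scram.ScramLoginModule '
                            'required username="alice" password="alice-secret";',
        "ssl.truststore.location": "/path/to/truststore.jks",
    }
    # interbroker_sasl_mechanism corresponds to the broker-side setting
    # sasl.mechanism.inter.broker.protocol (e.g. GSSAPI, PLAIN, or SCRAM-SHA-512).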
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 57.287 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
4 minutes 32.482 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 42.693 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_shutdown",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 15.813 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 13.842 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 15.833 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_shutdown",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 41.258 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_bounce",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
3 minutes 46.332 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "enable_idempotence": true,
  "failure_mode": "hard_shutdown",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 22.860 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 2.840 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 22.870 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 ZK/Raft-based controller, 3 Kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 31.917 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk/Raft-based controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 2.730 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk/Raft-based controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 46.624 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk/Raft-based controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 1.317 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.2"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk/Raft-based controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 26.221 seconds
Detail
Module: kafkatest.tests.core.replication_test
Class:  ReplicationTest
Method: test_replication_with_broker_failure
Arguments:
{
  "broker_type": "leader",
  "compression_type": "gzip",
  "failure_mode": "hard_shutdown",
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT",
  "tls_version": "TLSv1.3"
}
Replication tests.
        These tests verify that replication provides simple durability guarantees by checking that data acked by
        brokers is still available for consumption in the face of various failure scenarios.

        Setup: 1 zk/Raft-based controller, 3 kafka nodes, 1 topic with partitions=3, replication-factor=3, and min.insync.replicas=2

            - Produce messages in the background
            - Consume messages in the background
            - Drive broker failures (shutdown, or bounce repeatedly with kill -15 or kill -9)
            - When done driving failures, stop producing, and finish consuming
            - Validate that every acked message was consumed
        
1 minute 20.647 seconds
Detail
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "0.8.2.2",
  "to_message_format_version": null
}
Test upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, we upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start a 3-node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 1.165 seconds
Detail
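The two-phase upgrade described above amounts to two sets of per-broker config overrides, each applied with a rolling bounce. A sketch for the from_kafka_version=0.8.2.2, to_message_format_version=null row above (assumed structure, not the kafkatest code):

    from_kafka_version = "0.8.2.2"

    # Phase one: run the new binaries but keep the old wire protocol and message format.
    phase_one = {
        "inter.broker.protocol.version": from_kafka_version,
        "log.message.format.version": from_kafka_version,
    }
    # Phase two: drop both overrides so the broker defaults to the latest versions.
    # (With to_message_format_version=0.9 the second key would instead stay pinned to "0.9".)
    phase_two = {}

    for overrides in (phase_one, phase_two):
        # each phase is applied with a rolling bounce, one broker at a time (stubbed here)
        print("rolling bounce with overrides:", overrides)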
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "snappy"
  ],
  "from_kafka_version": "0.8.2.2",
  "to_message_format_version": null
}
Test upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, we upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start a 3-node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
3 minutes 18.777 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_all_brokers_bounce
Arguments:
{
  "failure_mode": "clean_bounce",
  "num_failures": 3
}
        Start a smoke test client, then kill a few brokers and ensure data is still received
        Record whether records are delivered
        
3 minutes 40.277 seconds
{
  "Client closed": "0c76dbe7-4f34-4ee3-9886-bcfd2bf16e1e: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
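The Data cell of a passing smoke-test row is machine-checkable. A sketch of the pass condition, using the field names exactly as they appear in this report:

    # A row passes when the smoke-test logic reports SUCCESS and all records arrive.
    def smoke_test_passed(data):
        return (data["Logic Success/Failure"].strip() == "SUCCESS"
                and data["Records Delivered"].strip() == "ALL-RECORDS-DELIVERED")

    row = {"Logic Success/Failure": "SUCCESS\n",
           "Records Delivered": "ALL-RECORDS-DELIVERED\n"}
    assert smoke_test_passed(row)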
Module: kafkatest.tests.core.upgrade_test
Class:  TestUpgrade
Method: test_upgrade
Arguments:
{
  "compression_types": [
    "none"
  ],
  "from_kafka_version": "0.9.0.1",
  "security_protocol": "SASL_SSL",
  "to_message_format_version": null
}
Test upgrade of a Kafka broker cluster from various versions to the current version

        from_kafka_version is the Kafka version to upgrade from

        If to_message_format_version is None, we upgrade to the default (latest)
        message format version. It is possible to upgrade to 0.10 brokers but still use message
        format version 0.9.

        - Start a 3-node broker cluster on version 'from_kafka_version'
        - Start producer and consumer in the background
        - Perform two-phase rolling upgrade
            - First phase: upgrade brokers to 0.10 with inter.broker.protocol.version set to
            from_kafka_version and log.message.format.version set to from_kafka_version
            - Second phase: remove inter.broker.protocol.version config with rolling bounce; if
            to_message_format_version is set to 0.9, set log.message.format.version to
            to_message_format_version, otherwise remove log.message.format.version config
        - Finally, validate that every message acked by the producer was consumed by the consumer
        
4 minutes 7.459 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_all_brokers_bounce
Arguments:
{
  "failure_mode": "hard_bounce",
  "num_failures": 3
}
        Start a smoke test client, then kill a few brokers and ensure data is still received
        Record whether records are delivered
        
3 minutes 36.630 seconds
{
  "Client closed": "9be913c3-20ed-4aa5-b57a-68d5398da143: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_bounce",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
4 minutes 33.837 seconds
{
  "Client closed": "e404e52b-de84-4f4e-8e9f-41113775c658: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_bounce",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
4 minutes 29.938 seconds
{
  "Client closed": "c0667bfe-0f40-4b89-8848-e70bca183997: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_bounce",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
4 minutes 40.547 seconds
{
  "Client closed": "4752d916-13bd-4683-a52c-002fd891f147: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_bounce",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
4 minutes 38.801 seconds
{
  "Client closed": "dae18d5f-1505-42b8-b3ea-81143ba4784b: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_shutdown",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 43.520 seconds
{
  "Client closed": "2920759c-aa61-40c5-9134-7e10ff03125b: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "clean_shutdown",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 46.107 seconds
{
  "Client closed": "9ac0a189-c29a-47b5-863b-93dd2136f879: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_shutdown",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 41.952 seconds
{
  "Client closed": "54b70d5b-d0f1-4973-9070-39fb9b2cbd68: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "clean_shutdown",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 44.852 seconds
{
  "Client closed": "8c25dc81-628d-449c-b168-ba59c66e9644: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_bounce",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
5 minutes 49.580 seconds
{
  "Client closed": "5765701a-5003-449b-a661-b78a9b989d7c: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_bounce",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
5 minutes 50.455 seconds
{
  "Client closed": "986efb9e-588b-4e76-b27f-ff70aa54385a: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_bounce",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
5 minutes 55.768 seconds
{
  "Client closed": "0bd666ea-33bb-4e6a-96a6-8baa35a1dac6: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_bounce",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
5 minutes 58.895 seconds
{
  "Client closed": "89c80773-b719-449d-957f-836b9145ed07: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_shutdown",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 54.690 seconds
{
  "Client closed": "f16815ae-ed12-4c62-b56c-dee4f617f05c: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_shutdown",
  "num_threads": 1,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 28.832 seconds
{
  "Client closed": "8adfa45f-e719-4cc0-b262-92a8df3e9fc5: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "controller",
  "failure_mode": "hard_shutdown",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 58.157 seconds
{
  "Client closed": "89ab0d24-8790-4d41-a2f2-63357d4d99b8: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_broker_type_bounce
Arguments:
{
  "broker_type": "leader",
  "failure_mode": "hard_shutdown",
  "num_threads": 3,
  "sleep_time_secs": 120
}
        Start a smoke test client, then kill one particular broker and ensure data is still received
        Record whether records are delivered.
        We also add a single-threaded streams client to make sure all partitions can be reassigned in the
        next generation, so as to verify that the partition-lost event is triggered correctly.
        
3 minutes 58.664 seconds
{
  "Client closed": "6d48d808-9357-405d-875a-1d8c2e93387b: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_many_brokers_bounce
Arguments:
{
  "failure_mode": "clean_bounce",
  "num_failures": 2
}
        Start a smoke test client, then kill a few brokers and ensure data is still received
        Record whether records are delivered
        
3 minutes 57.835 seconds
{
  "Client closed": "3e237ed7-818d-4d76-a1e1-338b75d57c13: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_many_brokers_bounce
Arguments:
{
  "failure_mode": "clean_shutdown",
  "num_failures": 2
}
        Start a smoke test client, then kill a few brokers and ensure data is still received
        Record whether records are delivered
        
3 minutes 42.612 seconds
{
  "Client closed": "0eb09839-1f16-4ff4-b340-192a371b4be3: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_many_brokers_bounce
Arguments:
{
  "failure_mode": "hard_bounce",
  "num_failures": 2
}
        Start a smoke test client, then kill a few brokers and ensure data is still received
        Record whether records are delivered
        
3 minutes 57.116 seconds
{
  "Client closed": "30fa10d7-e455-45ea-a415-a5ae7d468b6e: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "security_protocol": "PLAINTEXT"
}
Check that the console consumer starts/stops properly, and that we are capturing log output.
23.894 seconds
Detail
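A hedged sketch of the lifecycle being checked: start the console consumer, let it run, then stop it and confirm it exits cleanly. The flags are standard kafka-console-consumer.sh options; the path, broker address, and topic are assumptions:

    import subprocess
    import time

    consumer = subprocess.Popen([
        "bin/kafka-console-consumer.sh",
        "--bootstrap-server", "localhost:9092",
        "--topic", "test-topic",
        "--from-beginning",
    ])
    time.sleep(10)           # let it consume for a while
    consumer.terminate()     # clean stop
    consumer.wait(timeout=30)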
Module: kafkatest.tests.streams.streams_broker_down_resilience_test
Class:  StreamsBrokerDownResilience
Method: test_streams_runs_with_broker_down_initially
    This test validates that Streams is resilient to a broker
    being down longer than the timeouts specified in its configs
    
1 minute 15.337 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
Check that the console consumer starts/stops properly, and that we are capturing log output.
28.429 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "0.11.0.3"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast for brokers 2.4 or older
    
29.120 seconds
Detail
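The three Streams modes the docstring distinguishes correspond to values of the processing.guarantee config in the Kafka versions under test; a small sketch for orientation:

    # exactly_once (EOS-alpha) needs brokers 0.11+; exactly_once_beta needs brokers 2.5+.
    modes = {
        "EOS disabled": "at_least_once",
        "EOS-alpha": "exactly_once",
        "EOS-beta": "exactly_once_beta",
    }
    streams_config = {"processing.guarantee": modes["EOS-alpha"]}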
Module: kafkatest.tests.streams.streams_broker_down_resilience_test
Class:  StreamsBrokerDownResilience
Method: test_streams_resilient_to_broker_down
    This test validates that Streams is resilient to a broker
    being down longer than the timeouts specified in its configs
    
2 minutes 48.080 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "1.0.2"
}
    These tests validate that
    - Streams works for older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works for older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works for older brokers 2.5 (or newer)
    - Streams fails fast for older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast for brokers 2.4 or older
    
30.627 seconds
Detail
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_metadata_upgrade
Arguments:
{
  "from_version": "0.10.2.2",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with version <from_version> and upgrades them one by one to <to_version>
        
2 minutes 14.531 seconds
Detail
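For metadata upgrades like the row above, Kafka Streams documents a two-pass rolling bounce driven by the upgrade.from client config. A sketch with assumed values matching from_version 0.10.2.2:

    # First rolling bounce: new jar, upgrade.from pinned to the old version line.
    bounce_one = {"upgrade.from": "0.10.2"}
    # Second rolling bounce: remove the override once every instance is upgraded.
    bounce_two = {}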
Module: kafkatest.tests.streams.streams_broker_bounce_test
Class:  StreamsBrokerBounceTest
Method: test_many_brokers_bounce
Arguments:
{
  "failure_mode": "hard_shutdown",
  "num_failures": 2
}
        Start a smoke test client, then kill a few brokers and ensure data is still received
        Record whether records are delivered
        
3 minutes 57.065 seconds
{
  "Client closed": "1a9d818e-52f4-4246-8bca-7a2343aa37d0: SMOKE-TEST-CLIENT-CLOSED\n",
  "Logic Success/Failure": "SUCCESS\n",
  "Records Delivered": "ALL-RECORDS-DELIVERED\n"
}
Detail
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_metadata_upgrade
Arguments:
{
  "from_version": "0.11.0.3",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with version <from_version> and upgrades them one by one to <to_version>
        
2 minutes 7.973 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 43.361 seconds
{
  "latency_50th_ms": 2.0,
  "latency_999th_ms": 30.0,
  "latency_99th_ms": 19.0
}
Detail
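A hedged sketch of what the docstring says runs under the hood. The positional arguments follow EndToEndLatency's usage string in 2.x-era Kafka (verify against your build); broker address, topic, and message size are assumptions:

    import subprocess

    subprocess.run([
        "bin/kafka-run-class.sh", "kafka.tools.EndToEndLatency",
        "localhost:9092",  # broker list
        "test-topic",      # topic
        "10000",           # number of messages (10e3, as in the test)
        "1",               # producer acks
        "100",             # message size in bytes
    ], check=True)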
Module: kafkatest.tests.tools.replica_verification_test
Class:  ReplicaVerificationToolTest
Method: test_replica_lags
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
        Tests ReplicaVerificationTool
        :return: None
        
45.034 seconds
Detail
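A sketch of the tool this test exercises. Flag names are taken from the 2.x-era ReplicaVerificationTool and may differ in other versions; broker address and topic pattern are assumptions:

    import subprocess

    subprocess.run([
        "bin/kafka-replica-verification.sh",
        "--broker-list", "localhost:9092",
        "--topic-white-list", "test-topic",
        "--report-interval-ms", "5000",
    ], check=True)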
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_metadata_upgrade
Arguments:
{
  "from_version": "1.0.2",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with version <from_version> and upgrades them one by one to <to_version>
        
2 minutes 14.350 seconds
Detail
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_metadata_upgrade
Arguments:
{
  "from_version": "1.1.1",
  "to_version": "6.2.0-0"
}
        Starts 3 KafkaStreams instances with version <from_version> and upgrades them one by one to <to_version>
        
2 minutes 8.477 seconds
Detail
Module: kafkatest.tests.tools.replica_verification_test
Class:  ReplicaVerificationToolTest
Method: test_replica_lags
Arguments:
{
  "metadata_quorum": "ZK"
}
        Tests ReplicaVerificationTool
        :return: None
        
55.427 seconds
Detail
Module: kafkatest.tests.streams.streams_upgrade_test
Class:  StreamsUpgradeTest
Method: test_version_probing_upgrade
        Starts 3 KafkaStreams instances and upgrades them one by one to "future version"
        
1 minute 35.294 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "security_protocol": "SSL"
}
Check that the console consumer starts/stops properly, and that we are capturing log output.
27.171 seconds
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 38.978 seconds
{
  "latency_50th_ms": 2.0,
  "latency_999th_ms": 19.0,
  "latency_99th_ms": 9.0
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 35.140 seconds
{
  "latency_50th_ms": 2.0,
  "latency_999th_ms": 18.0,
  "latency_99th_ms": 8.0
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_long_term_producer_throughput
Arguments:
{
  "compression_type": "none",
  "security_protocol": "PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce 10e6 100-byte messages to a topic with 6 partitions, replication-factor 3, and acks=1.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.

        (This runs ProducerPerformance.java under the hood)
        
1 minute 22.419 seconds
{
  "0": {
    "mb_per_sec": 47.19,
    "records_per_sec": 494780.070259
  }
}
Detail
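A sketch of the workload the docstring describes, via kafka-producer-perf-test.sh, the standard wrapper around ProducerPerformance.java; topic and bootstrap server are assumptions:

    import subprocess

    subprocess.run([
        "bin/kafka-producer-perf-test.sh",
        "--topic", "test-topic",
        "--num-records", "10000000",  # 10e6 messages, as in the test
        "--record-size", "100",       # 100-byte messages
        "--throughput", "-1",         # unthrottled
        "--producer-props", "bootstrap.servers=localhost:9092", "acks=1",
    ], check=True)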
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_end_to_end_latency
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce (acks = 1) and consume 10e3 messages to a topic with 6 partitions and replication-factor 3,
        measuring the latency between production and consumption of each message.

        Return aggregate latency statistics.

        (Under the hood, this simply runs EndToEndLatency.scala)
        
1 minute 38.608 seconds
{
  "latency_50th_ms": 2.0,
  "latency_999th_ms": 17.0,
  "latency_99th_ms": 8.0
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_long_term_producer_throughput
Arguments:
{
  "compression_type": "snappy",
  "security_protocol": "PLAINTEXT"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce 10e6 100-byte messages to a topic with 6 partitions, replication-factor 3, and acks=1.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.

        (This runs ProducerPerformance.java under the hood)
        
1 minute 17.131 seconds
{
  "0": {
    "mb_per_sec": 60.31,
    "records_per_sec": 632351.08132
  }
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_long_term_producer_throughput
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce 10e6 100-byte messages to a topic with 6 partitions, replication-factor 3, and acks=1.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.

        (This runs ProducerPerformance.java under the hood)
        
1 minute 45.361 seconds
{
  "0": {
    "mb_per_sec": 29.52,
    "records_per_sec": 309530.442319
  }
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_long_term_producer_throughput
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce 10e6 100-byte messages to a topic with 6 partitions, replication-factor 3, and acks=1.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.

        (This runs ProducerPerformance.java under the hood)
        
1 minute 22.535 seconds
{
  "0": {
    "mb_per_sec": 77.73,
    "records_per_sec": 815062.35227
  }
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "topic": "topic-replication-factor-one"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 3.488 seconds
{
  "mb_per_sec": 34.71,
  "records_per_sec": 363930.856833
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": -1,
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 9.453 seconds
{
  "mb_per_sec": 14.33,
  "records_per_sec": 150232.482651
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_long_term_producer_throughput
Arguments:
{
  "compression_type": "snappy",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce 10e6 100-byte messages to a topic with 6 partitions, replication-factor 3, and acks=1.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.

        (This runs ProducerPerformance.java under the hood)
        
1 minute 29.785 seconds
{
  "0": {
    "mb_per_sec": 57.84,
    "records_per_sec": 606501.698205
  }
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_long_term_producer_throughput
Arguments:
{
  "compression_type": "none",
  "interbroker_security_protocol": "PLAINTEXT",
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce 10e6 100-byte messages to a topic with 6 partitions, replication-factor 3, and acks=1.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.

        (This runs ProducerPerformance.java under the hood)
        
1 minute 44.718 seconds
{
  "0": {
    "mb_per_sec": 29.81,
    "records_per_sec": 312548.835756
  }
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 6.175 seconds
{
  "mb_per_sec": 28.34,
  "records_per_sec": 297204.827281
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 10,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 16.096 seconds
{
  "mb_per_sec": 8.36,
  "records_per_sec": 876151.968144
}
Detail
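The two throughput figures in these rows are mutually consistent: mb_per_sec equals records_per_sec × message_size scaled by 2^20, i.e. it is reported in MiB/s. A quick check against the row above:

    records_per_sec = 876151.968144
    message_size = 10  # bytes
    print(round(records_per_sec * message_size / 2**20, 2))  # 8.36, as reported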
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 10,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 26.309 seconds
{
  "mb_per_sec": 8.81,
  "records_per_sec": 923728.286304
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 10,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 27.478 seconds
{
  "mb_per_sec": 8.81,
  "records_per_sec": 923855.451542
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 10,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 24.386 seconds
{
  "mb_per_sec": 6.97,
  "records_per_sec": 730675.159236
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 10,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 29.123 seconds
{
  "mb_per_sec": 7.83,
  "records_per_sec": 821255.093924
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 10,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 30.951 seconds
{
  "mb_per_sec": 7.85,
  "records_per_sec": 823522.640815
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 100,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 7.352 seconds
{
  "mb_per_sec": 28.01,
  "records_per_sec": 293757.277304
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 100,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 19.068 seconds
{
  "mb_per_sec": 17.1,
  "records_per_sec": 179291.611007
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 100,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 5.939 seconds
{
  "mb_per_sec": 38.69,
  "records_per_sec": 405736.698912
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 100,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 21.343 seconds
{
  "mb_per_sec": 17.7,
  "records_per_sec": 185639.972337
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 100,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 17.348 seconds
{
  "mb_per_sec": 38.74,
  "records_per_sec": 406227.905569
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 1000,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 7.765 seconds
{
  "mb_per_sec": 30.69,
  "records_per_sec": 32178.614241
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 100,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 19.306 seconds
{
  "mb_per_sec": 38.13,
  "records_per_sec": 399814.417635
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 1000,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 5.503 seconds
{
  "mb_per_sec": 63.15,
  "records_per_sec": 66214.602861
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 1000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 22.623 seconds
{
  "mb_per_sec": 20.4,
  "records_per_sec": 21392.572522
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 1000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 21.430 seconds
{
  "mb_per_sec": 19.65,
  "records_per_sec": 20601.227936
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 1000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 19.362 seconds
{
  "mb_per_sec": 34.74,
  "records_per_sec": 36432.410423
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 10000,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 6.668 seconds
{
  "mb_per_sec": 35.19,
  "records_per_sec": 3690.129227
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 1000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 13.191 seconds
{
  "mb_per_sec": 42.27,
  "records_per_sec": 44325.297226
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 10000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 17.756 seconds
{
  "mb_per_sec": 22.27,
  "records_per_sec": 2335.305377
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 10000,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 8.940 seconds
{
  "mb_per_sec": 28.76,
  "records_per_sec": 3015.277466
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 10000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 13.849 seconds
{
  "mb_per_sec": 21.81,
  "records_per_sec": 2287.150648
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 100000,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 4.651 seconds
{
  "mb_per_sec": 70.55,
  "records_per_sec": 739.801544
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 10000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 21.582 seconds
{
  "mb_per_sec": 21.37,
  "records_per_sec": 2240.941726
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 10000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 21.630 seconds
{
  "mb_per_sec": 21.2,
  "records_per_sec": 2223.492379
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 100000,
  "security_protocol": "PLAINTEXT",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 5.274 seconds
{
  "mb_per_sec": 71.5,
  "records_per_sec": 749.72067
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 100000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 15.801 seconds
{
  "mb_per_sec": 37.43,
  "records_per_sec": 392.512431
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "none",
  "message_size": 100000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 20.523 seconds
{
  "mb_per_sec": 37.01,
  "records_per_sec": 388.085599
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "new_consumer": false,
  "version": "0.8.2.2"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
45.518 seconds
{
  "consumer_performance": {
    "mb_per_sec": 317.8914,
    "records_per_sec": 3333333.3333
  },
  "end_to_end_latency": {
    "latency_50th_ms": 0.0,
    "latency_999th_ms": 7.0,
    "latency_99th_ms": 1.0
  },
  "producer_performance": {
    "mb_per_sec": 2.29,
    "records_per_sec": 24038.461538
  }
}
Detail
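The end_to_end_latency block reports the 50th, 99th, and 99.9th percentile latencies in milliseconds. A rough sketch of how such percentiles can be derived from raw per-message samples (the sample list below is made up for illustration; this is not the kafkatest implementation):

def percentile(samples_ms, fraction):
    # Nearest-rank percentile over a list of latency samples.
    ordered = sorted(samples_ms)
    index = min(len(ordered) - 1, int(fraction * len(ordered)))
    return ordered[index]

samples = [0.4, 0.6, 0.9, 1.2, 7.3]  # made-up latencies, in ms
summary = {
    "latency_50th_ms": percentile(samples, 0.50),
    "latency_99th_ms": percentile(samples, 0.99),
    "latency_999th_ms": percentile(samples, 0.999),
}
print(summary)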
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 100000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.2",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 14.768 seconds
{
  "mb_per_sec": 37.58,
  "records_per_sec": 394.01057
}
Detail
Module: kafkatest.benchmarks.core.benchmark_test
Class:  Benchmark
Method: test_producer_throughput
Arguments:
{
  "acks": 1,
  "compression_type": "snappy",
  "message_size": 100000,
  "security_protocol": "SSL",
  "tls_version": "TLSv1.3",
  "topic": "topic-replication-factor-three"
}
        Setup: 1 node zk + 3 node kafka cluster
        Produce ~128MB worth of messages to a topic with 6 partitions. Required acks, topic replication factor,
        security protocol and message size are varied depending on arguments injected into this test.

        Collect and return aggregate throughput statistics after all messages have been acknowledged.
        (This runs ProducerPerformance.java under the hood)
        
1 minute 14.546 seconds
{
  "mb_per_sec": 35.83,
  "records_per_sec": 375.699888
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "version": "0.9.0.1"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
41.928 seconds
{
  "consumer_performance": {
    "mb_per_sec": 5.0835,
    "records_per_sec": 88105.7269
  },
  "end_to_end_latency": {
    "latency_50th_ms": 1.0,
    "latency_999th_ms": 8.0,
    "latency_99th_ms": 3.0
  },
  "producer_performance": {
    "mb_per_sec": 1.97,
    "records_per_sec": 20661.157025
  }
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "new_consumer": false,
  "version": "0.9.0.1"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
45.068 seconds
{
  "consumer_performance": {
    "mb_per_sec": 317.8914,
    "records_per_sec": 3333333.3333
  },
  "end_to_end_latency": {
    "latency_50th_ms": 1.0,
    "latency_999th_ms": 8.0,
    "latency_99th_ms": 4.0
  },
  "producer_performance": {
    "mb_per_sec": 1.92,
    "records_per_sec": 20080.321285
  }
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "version": "dev"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
48.129 seconds
{
  "consumer_performance": {
    "mb_per_sec": 1.2233,
    "records_per_sec": 12853.8462
  },
  "end_to_end_latency": {
    "latency_50th_ms": 1.0,
    "latency_999th_ms": 9.0,
    "latency_99th_ms": 4.0
  },
  "producer_performance": {
    "mb_per_sec": 1.17,
    "records_per_sec": 12224.938875
  }
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "new_consumer": false,
  "version": "1.1.1"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
49.695 seconds
{
  "consumer_performance": {
    "mb_per_sec": 476.8372,
    "records_per_sec": 5000000.0
  },
  "end_to_end_latency": {
    "latency_50th_ms": 1.0,
    "latency_999th_ms": 11.0,
    "latency_99th_ms": 5.0
  },
  "producer_performance": {
    "mb_per_sec": 1.75,
    "records_per_sec": 18382.352941
  }
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "version": "dev"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
53.030 seconds
{
  "consumer_performance": {
    "mb_per_sec": 1.211,
    "records_per_sec": 12727.1574
  },
  "end_to_end_latency": {
    "latency_50th_ms": 1.0,
    "latency_999th_ms": 10.0,
    "latency_99th_ms": 4.0
  },
  "producer_performance": {
    "mb_per_sec": 1.17,
    "records_per_sec": 12285.012285
  }
}
Detail
Module: kafkatest.sanity_checks.test_performance_services
Class:  PerformanceServiceTest
Method: test_version
Arguments:
{
  "metadata_quorum": "ZK",
  "version": "dev"
}
        Sanity-check our producer performance service - verify that we can run the service with a small
        number of messages. The actual stats here are pretty meaningless since the number of messages is quite small.
        
48.336 seconds
{
  "consumer_performance": {
    "mb_per_sec": 1.2774,
    "records_per_sec": 13888.5942
  },
  "end_to_end_latency": {
    "latency_50th_ms": 1.0,
    "latency_999th_ms": 11.0,
    "latency_99th_ms": 4.0
  },
  "producer_performance": {
    "mb_per_sec": 1.09,
    "records_per_sec": 11467.889908
  }
}
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "old_broker_throttling_behavior": true,
  "quota_type": "client-id"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
2 minutes 59.292 seconds
Detail
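The "close to the value we expect" check amounts to comparing observed throughput against the configured quota within a tolerance. A hypothetical helper along those lines (the 10% tolerance and the byte rates are assumptions for illustration, not values from the test):

def within_quota(observed_bps, quota_bps, tolerance=0.1):
    # True when observed throughput is within +/- tolerance of the quota.
    return abs(observed_bps - quota_bps) <= tolerance * quota_bps

assert within_quota(observed_bps=2_600_000, quota_bps=2_500_000)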
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "consumer_num": 2,
  "quota_type": "client-id"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
3 minutes 9.090 seconds
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "override_quota": true,
  "quota_type": "(user, client-id)"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
3 minutes 25.090 seconds
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "old_client_throttling_behavior": true,
  "quota_type": "client-id"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
3 minutes 19.050 seconds
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "override_quota": false,
  "quota_type": "(user, client-id)"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
4 minutes 12.074 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": true,
  "broker_version": "0.10.0.1",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.468 seconds
Detail
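For reference, the knobs this matrix varies map onto standard Connect distributed-worker configs; a sketch of the relevant settings (the topic names and group id below are placeholders, not the test's actual values):

worker_config = {
    "group.id": "connect-cluster",              # placeholder cluster id
    "connect.protocol": "compatible",           # or "eager" / "sessioned", per the matrix
    "config.storage.topic": "connect-configs",  # internal topics that Connect creates
    "offset.storage.topic": "connect-offsets",  # itself on brokers >= 0.10.1.0, or expects
    "status.storage.topic": "connect-status",   # the broker to auto-create on 0.10.0.x
}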
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "override_quota": false,
  "quota_type": "client-id"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
4 minutes 5.927 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": true,
  "broker_version": "0.10.0.1",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
43.957 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": true,
  "broker_version": "0.10.0.1",
  "connect_protocol": "sessioned",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.547 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.10.1.1",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.823 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.10.1.1",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.016 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.10.1.1",
  "connect_protocol": "sessioned",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
47.966 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.10.2.2",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.171 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.10.2.2",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.558 seconds
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "override_quota": true,
  "quota_type": "client-id"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
3 minutes 16.909 seconds
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "override_quota": true,
  "quota_type": "user"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
3 minutes 13.804 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.11.0.3",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
40.352 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.10.2.2",
  "connect_protocol": "sessioned",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.458 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.11.0.3",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
38.421 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "0.11.0.3",
  "connect_protocol": "sessioned",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
44.399 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "1.0.2",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
39.284 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "1.0.2",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
39.747 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "1.1.1",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.314 seconds
Detail
Module: kafkatest.tests.client.quota_test
Class:  QuotaTest
Method: test_quota
Arguments:
{
  "override_quota": false,
  "quota_type": "user"
}
    These tests verify that quotas provide the expected functionality -- they run
    producer, broker, and consumer with different clientId and quota configurations and
    check that the observed throughput is close to the value we expect.
    
4 minutes 8.878 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "1.1.1",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
40.313 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.0.1",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
45.518 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.0.1",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.406 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.1.1",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
40.615 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.2.2",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.411 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.2.2",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.377 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.1.1",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
52.728 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.3.1",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.509 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "2.3.1",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.757 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "dev",
  "connect_protocol": "compatible",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
42.468 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "dev",
  "connect_protocol": "eager",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.314 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_broker_compatibility
Arguments:
{
  "auto_create_topics": false,
  "broker_version": "dev",
  "connect_protocol": "sessioned",
  "security_protocol": "PLAINTEXT"
}
        Verify that Connect will start up with various broker versions and configurations.
        When Connect distributed starts up, it either creates internal topics (v0.10.1.0 and after) 
        or relies upon the broker to auto-create the topics (v0.10.0.x and before).
        
41.308 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_and_resume_sink
Arguments:
{
  "connect_protocol": "compatible"
}
        Verify that sink connectors stop consuming records when paused and begin again after
        being resumed.
        
52.925 seconds
Detail
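Pausing and resuming happens through the Connect REST API (PUT /connectors/{name}/pause and /connectors/{name}/resume). A minimal sketch, assuming a worker at localhost:8083 and a connector named "my-sink" (both placeholders, not the test's values):

import urllib.request

def set_connector_state(base_url, name, action):
    # action is "pause" or "resume"; Connect answers 202 Accepted.
    request = urllib.request.Request(f"{base_url}/connectors/{name}/{action}", method="PUT")
    with urllib.request.urlopen(request) as response:
        return response.status

# set_connector_state("http://localhost:8083", "my-sink", "pause")
# set_connector_state("http://localhost:8083", "my-sink", "resume")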
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_and_resume_sink
Arguments:
{
  "connect_protocol": "eager"
}
        Verify that sink connectors stop consuming records when paused and begin again after
        being resumed.
        
52.301 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_and_resume_sink
Arguments:
{
  "connect_protocol": "sessioned"
}
        Verify that sink connectors stop consuming records when paused and begin again after
        being resumed.
        
52.594 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_and_resume_source
Arguments:
{
  "connect_protocol": "compatible"
}
        Verify that source connectors stop producing records when paused and begin again after
        being resumed.
        
51.520 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_and_resume_source
Arguments:
{
  "connect_protocol": "eager"
}
        Verify that source connectors stop producing records when paused and begin again after
        being resumed.
        
50.559 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_and_resume_source
Arguments:
{
  "connect_protocol": "sessioned"
}
        Verify that source connectors stop producing records when paused and begin again after
        being resumed.
        
50.502 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_state_persistent
Arguments:
{
  "connect_protocol": "compatible"
}
        Verify that paused state is preserved after a cluster restart.
        
1 minute 9.033 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_connector
Arguments:
{
  "connect_protocol": "compatible"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
45.645 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_connector
Arguments:
{
  "connect_protocol": "eager"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
45.296 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_connector
Arguments:
{
  "connect_protocol": "sessioned"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
45.334 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_state_persistent
Arguments:
{
  "connect_protocol": "eager"
}
        Verify that paused state is preserved after a cluster restart.
        
1 minute 11.476 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_pause_state_persistent
Arguments:
{
  "connect_protocol": "sessioned"
}
        Verify that paused state is preserved after a cluster restart.
        
1 minute 10.147 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_task
Arguments:
{
  "connect_protocol": "compatible",
  "connector_type": "sink"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
45.424 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_task
Arguments:
{
  "connect_protocol": "eager",
  "connector_type": "sink"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
46.432 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_task
Arguments:
{
  "connect_protocol": "sessioned",
  "connector_type": "sink"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
47.654 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_task
Arguments:
{
  "connect_protocol": "compatible",
  "connector_type": "source"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
45.491 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_task
Arguments:
{
  "connect_protocol": "eager",
  "connector_type": "source"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
46.272 seconds
Detail
Module: kafkatest.tests.connect.connect_distributed_test
Class:  ConnectDistributedTest
Method: test_restart_failed_task
Arguments:
{
  "connect_protocol": "sessioned",
  "connector_type": "source"
}
    Simple test of Kafka Connect in distributed mode, producing data from files on one cluster and consuming it on
    another, validating the total output is identical to the input.
    
47.289 seconds
Detail
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_file_source_and_sink
Arguments:
{
  "converter": "org.apache.kafka.connect.json.JsonConverter",
  "schemas": false
}
        Validates basic end-to-end functionality of Connect standalone using the file source and sink connectors. Includes
        parameterizations to test different converters (which also test per-connector converter overrides), schema/schemaless
        modes, and security support.
        
1 minute 3.381 seconds
Detail
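The converter/schemas parameterizations correspond to standard Connect converter configs; a sketch of the settings the schemas=false variant above would exercise (values illustrative, not taken from the test):

converter_config = {
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "false",    # the "schemas" argument above
    "value.converter.schemas.enable": "false",
}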
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_file_source_and_sink
Arguments:
{
  "converter": "org.apache.kafka.connect.json.JsonConverter",
  "schemas": true
}
        Validates basic end-to-end functionality of Connect standalone using the file source and sink connectors. Includes
        parameterizations to test different converters (which also test per-connector converter overrides), schema/schemaless
        modes, and security support.
        
1 minute 2.429 seconds
Detail
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_file_source_and_sink
Arguments:
{
  "converter": "org.apache.kafka.connect.storage.StringConverter",
  "schemas": null
}
        Validates basic end-to-end functionality of Connect standalone using the file source and sink connectors. Includes
        parameterizations to test different converters (which also test per-connector converter overrides), schema/schemaless
        modes, and security support.
        
1 minute 2.649 seconds
Detail
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_file_source_and_sink
Arguments:
{
  "security_protocol": "PLAINTEXT"
}
        Validates basic end-to-end functionality of Connect standalone using the file source and sink connectors. Includes
        parameterizations to test different converters (which also test per-connector converter overrides), schema/schemaless
        modes, and security support.
        
1 minute 1.974 seconds
Detail
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_skip_and_log_to_dlq
Arguments:
{
  "error_tolerance": "all"
}
    Simple test of Kafka Connect that produces data from a file in one
    standalone process and consumes it on another, validating the output is
    identical to the input.
    
49.420 seconds
Detail
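The error_tolerance argument maps onto Connect's per-connector error-handling configs; with tolerance "all", bad records are skipped and logged to a dead letter queue topic. A sketch (the topic name is a placeholder):

dlq_config = {
    "errors.tolerance": "all",                      # "none" fails the task instead
    "errors.deadletterqueue.topic.name": "my-dlq",  # placeholder DLQ topic
    "errors.deadletterqueue.context.headers.enable": "true",
}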
Module: kafkatest.tests.connect.connect_test
Class:  ConnectStandaloneFileTest
Method: test_skip_and_log_to_dlq
Arguments:
{
  "error_tolerance": "none"
}
    Simple test of Kafka Connect that produces data from a file in one
    standalone process and consumes it on another, validating the output is
    identical to the input.
    
1 minute 4.003 seconds
Detail
Module: kafkatest.tests.core.delegation_token_test
Class:  DelegationTokenTest
Method: test_delegation_token_lifecycle
47.483 seconds
Detail
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_latency
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 1000,
  "task_name": "latency-100-rate-1000"
}
    These tests ensure that the network degrade Trogdor specs (which use "tc") are working as expected in whatever
    environment the system tests may be running in. The Linux tools "ping" and "iperf" are used for validation
    and need to be available along with "tc" in the test environment.
    
43.723 seconds
Detail
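Under the hood the degrade spec shapes traffic with "tc"; for the arguments above (50 ms on eth0), the equivalent netem invocation looks roughly like the following sketch (assembled by hand here, not taken from the Trogdor spec):

def netem_delay_command(device, latency_ms):
    # Add an egress delay qdisc on the given device.
    return ["tc", "qdisc", "add", "dev", device, "root", "netem",
            "delay", "%dms" % latency_ms]

print(" ".join(netem_delay_command("eth0", 50)))
# tc qdisc add dev eth0 root netem delay 50ms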
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_latency
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 0,
  "task_name": "latency-100"
}
    These tests ensure that the network degrade Trogdor specs (which use "tc") are working as expected in whatever
    environment the system tests may be running in. The Linux tools "ping" and "iperf" are used for validation
    and need to be available along with "tc" in the test environment.
    
43.229 seconds
Detail
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_rate
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 50,
  "rate_limit_kbit": 1000000,
  "task_name": "rate-1000-latency-50"
}
    These tests ensure that the network degrade Trogdor specs (which use "tc") are working as expected in whatever
    environment the system tests may be running in. The Linux tools "ping" and "iperf" are used for validation
    and need to be available along with "tc" in the test environment.
    
44.939 seconds
Detail
Module: kafkatest.tests.core.network_degrade_test
Class:  NetworkDegradeTest
Method: test_rate
Arguments:
{
  "device_name": "eth0",
  "latency_ms": 0,
  "rate_limit_kbit": 1000000,
  "task_name": "rate-1000"
}
    These tests ensure that the network degrade Trogdor specs (which use "tc") are working as expected in whatever
    environment the system tests may be running in. The Linux tools "ping" and "iperf" are used for validation
    and need to be available along with "tc" in the test environment.
    
45.952 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_PLAINTEXT"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
43.638 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "1.1.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
29.637 seconds
Detail
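The EOS-alpha/EOS-beta modes in this docstring correspond to the Streams processing.guarantee config ("exactly_once" vs. "exactly_once_beta"). An illustrative config block (application id and bootstrap address are placeholders):

streams_config = {
    "application.id": "broker-compat-check",  # placeholder
    "bootstrap.servers": "localhost:9092",    # placeholder
    "processing.guarantee": "exactly_once",   # EOS-alpha; "exactly_once_beta" for EOS-beta
}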
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_PLAINTEXT"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
36.919 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "2.0.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
29.882 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "SASL_SSL"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
46.035 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "2.1.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
31.175 seconds
Detail
Module: kafkatest.sanity_checks.test_console_consumer
Class:  ConsoleConsumerTest
Method: test_lifecycle
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SSL"
}
Check that the console consumer starts/stops properly, and that we are capturing log output.
31.875 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SASL_SSL"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
54.579 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "2.2.2"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
31.105 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "2.3.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
32.357 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_compatible_brokers_eos_disabled
Arguments:
{
  "broker_version": "2.4.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
31.658 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers
Arguments:
{
  "broker_version": "0.10.0.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
26.682 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers
Arguments:
{
  "broker_version": "0.10.1.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
26.585 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers
Arguments:
{
  "broker_version": "0.10.2.2"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
26.762 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "0.11.0.3"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
29.952 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "1.0.2"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
27.941 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "1.1.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
27.851 seconds
Detail
Module: kafkatest.tests.tools.kibosh_test
Class:  KiboshTest
Method: test_kibosh_service
3.461 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "2.0.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
30.851 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "2.1.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
30.442 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "2.2.2"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
30.355 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "2.3.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
31.394 seconds
Detail
Module: kafkatest.tests.streams.streams_broker_compatibility_test
Class:  StreamsBrokerCompatibility
Method: test_fail_fast_on_incompatible_brokers_if_eos_beta_enabled
Arguments:
{
  "broker_version": "2.4.1"
}
    These tests validate that
    - Streams works with older brokers 0.11 (or newer)
    - Streams w/ EOS-alpha works with older brokers 0.11 (or newer)
    - Streams w/ EOS-beta works with older brokers 2.5 (or newer)
    - Streams fails fast on older brokers 0.10.0, 0.10.1, and 0.10.2
    - Streams w/ EOS-beta fails fast on brokers 2.4 or older
    
30.595 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
27.008 seconds
Detail
Module: kafkatest.tests.tools.trogdor_test
Class:  TrogdorTest
Method: test_network_partition_fault
        Test that the network partition fault results in a true network partition between nodes.
        
11.362 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
30.443 seconds
Detail
Module: kafkatest.tests.tools.trogdor_test
Class:  TrogdorTest
Method: test_trogdor_service
        Test that we can bring up Trogdor and create a no-op fault.
        
11.472 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "SSL"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
35.860 seconds
Detail
Module: kafkatest.tests.tools.log4j_appender_test
Class:  Log4jAppenderTest
Method: test_log4j_appender
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SSL"
}
        Tests if KafkaLog4jAppender is producing to a Kafka topic
        :return: None
        
41.887 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "producer_version": "0.10.0.1"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
28.897 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "producer_version": "0.10.1.1"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
31.083 seconds
Detail
Module: kafkatest.sanity_checks.test_kafka_version
Class:  KafkaVersionTest
Method: test_multi_version
Test the Kafka service node-versioning API - ensure we can bring up a 2-node cluster, one on version 0.8.2.X,
        the other on the current development branch.
38.594 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "producer_version": "0.8.2.2"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
30.445 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "producer_version": "dev",
  "security_protocol": "PLAINTEXT"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
29.043 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "producer_version": "0.9.0.1"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
29.489 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "security_protocol": "PLAINTEXT"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
35.030 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "security_protocol": "PLAINTEXT"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
30.693 seconds
Detail
Module: kafkatest.tests.tools.log_compaction_test
Class:  LogCompactionTest
Method: test_log_compaction
Arguments:
{
  "metadata_quorum": "ZK"
}
1 minute 20.546 seconds
Detail
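Note: log compaction is a per-topic setting (cleanup.policy=compact); the test produces duplicate keys and waits for the cleaner to deduplicate them. As context, a sketch of creating a compacted topic with kafka-python's admin client (an assumption; the system test drives Kafka's CLI tooling, and the topic name and tuning values here are invented):

    # Hedged sketch: create a compacted topic with aggressive cleaner settings.
    from kafka.admin import KafkaAdminClient, NewTopic

    admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
    admin.create_topics([
        NewTopic(
            name="compaction-demo",
            num_partitions=1,
            replication_factor=1,
            topic_configs={
                "cleanup.policy": "compact",
                "segment.ms": "10000",               # roll segments quickly
                "min.cleanable.dirty.ratio": "0.01", # let the cleaner run early
            },
        )
    ])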
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "COLOCATED_RAFT",
  "producer_version": "dev",
  "security_protocol": "SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
34.291 seconds
Detail
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_describe_consumer_group
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
        Tests that ConsumerGroupCommand describes a consumer group correctly
        :return: None
        
23.969 seconds
Detail
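Note: the describe check drives Kafka's kafka-consumer-groups.sh tool. Roughly the invocation being verified, wrapped in Python for this sketch (the script path and group name are assumptions; the flags are the tool's real ones):

    # Hedged sketch: describe a consumer group the way the test's tooling does.
    import subprocess

    out = subprocess.run(
        ["bin/kafka-consumer-groups.sh",
         "--bootstrap-server", "localhost:9092",
         "--describe", "--group", "test-consumer-group"],
        capture_output=True, text=True, check=True,
    ).stdout
    # The test then asserts the output names the group's topic,
    # partitions, offsets, and lag.
    print(out)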
Module: kafkatest.tests.tools.log_compaction_test
Class:  LogCompactionTest
Method: test_log_compaction
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT"
}
1 minute 27.363 seconds
Detail
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_describe_consumer_group
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
        Tests that ConsumerGroupCommand describes a consumer group correctly
        :return: None
        
30.015 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "ZK",
  "producer_version": "dev",
  "security_protocol": "SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
35.136 seconds
Detail
Module: kafkatest.sanity_checks.test_verifiable_producer
Class:  TestVerifiableProducer
Method: test_simple_run
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "producer_version": "dev",
  "security_protocol": "SSL"
}
        Test that we can start VerifiableProducer on the current branch snapshot version or against the 0.8.2 jar, and
        verify that we can produce a small number of messages.
        
42.043 seconds
Detail
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_describe_consumer_group
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SSL"
}
        Tests that ConsumerGroupCommand describes a consumer group correctly
        :return: None
        
36.884 seconds
Detail
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_list_consumer_groups
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "PLAINTEXT"
}
        Tests that ConsumerGroupCommand lists consumer groups correctly
        :return: None
        
22.808 seconds
Detail
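Note: the list variant is the same tool with --list instead of --describe; a companion sketch (the expected group name is again hypothetical):

    # Hedged sketch: list consumer groups and check the expected one appears.
    import subprocess

    groups = subprocess.run(
        ["bin/kafka-consumer-groups.sh",
         "--bootstrap-server", "localhost:9092", "--list"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    assert "test-consumer-group" in groups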
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_describe_consumer_group
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "SSL"
}
        Tests that ConsumerGroupCommand describes a consumer group correctly
        :return: None
        
31.445 seconds
Detail
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_list_consumer_groups
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "PLAINTEXT"
}
        Tests that ConsumerGroupCommand lists consumer groups correctly
        :return: None
        
28.693 seconds
Detail
Module: kafkatest.tests.streams.streams_shutdown_deadlock_test
Class:  StreamsShutdownDeadlockTest
Method: test_shutdown_wont_deadlock
        Start ShutdownDeadLockTest, wait for up to 1 minute, and check that the process exited.
        If it has not exited by then, fail the test as deadlocked.
        
31.750 seconds
Detail
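Note: the deadlock check reduces to "give the process a bounded window to exit, and treat a timeout as a deadlock." A generic Python sketch of that core idea; the command line is a stand-in, not the real ShutdownDeadLockTest launcher.

    # Hedged sketch: bounded wait for process exit; timeout => deadlock.
    import subprocess

    proc = subprocess.Popen(["java", "-cp", "app.jar", "ShutdownDeadLockTest"])
    try:
        returncode = proc.wait(timeout=60)   # up to 1 minute, as in the test
    except subprocess.TimeoutExpired:
        proc.kill()
        raise AssertionError("process still running after 1 minute: deadlocked?")
    assert returncode == 0, "process exited abnormally"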
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_list_consumer_groups
Arguments:
{
  "metadata_quorum": "ZK",
  "security_protocol": "SSL"
}
        Tests that ConsumerGroupCommand lists consumer groups correctly
        :return: None
        
31.999 seconds
Detail
Module: kafkatest.tests.core.consumer_group_command_test
Class:  ConsumerGroupCommandTest
Method: test_list_consumer_groups
Arguments:
{
  "metadata_quorum": "REMOTE_RAFT",
  "security_protocol": "SSL"
}
        Tests that ConsumerGroupCommand lists consumer groups correctly
        :return: None
        
37.021 seconds
Detail