ndt-dev - [ndt-dev] [ndt] r637 committed - Convert the test descriptions to numbered lists
- Subject: [ndt-dev] [ndt] r637 committed - Convert the test descriptions to numbered lists
- Date: Mon, 19 Sep 2011 17:48:22 +0000
Revision: 637
Author:
Date: Mon Sep 19 10:47:12 2011
Log: Convert the test descriptions to numbered lists
http://code.google.com/p/ndt/source/detail?r=637
Modified:
/wiki/NDTTestMethodology.wiki
=======================================
--- /wiki/NDTTestMethodology.wiki Fri Sep 16 12:05:23 2011
+++ /wiki/NDTTestMethodology.wiki Mon Sep 19 10:47:12 2011
@@ -34,33 +34,29 @@
=== Middlebox Test ===
-The middlebox test is a short throughput test from the server to the client with a limited CWND ([http://en.wikipedia.org/wiki/Congestion_window congestion window] - one of the factors that determines the number of bytes that can be outstanding at any time) to check for a duplex mismatch condition. Moreover, this test uses a pre-defined MSS value to check if any intermediate node will modify its connection settings.
+The middlebox test is a short throughput test from the server to the client with a limited Congestion Window ([http://en.wikipedia.org/wiki/Congestion_window congestion window] - one of the factors that determines the number of bytes that can be outstanding at any time) to check for a duplex mismatch condition. Moreover, this test uses a pre-defined Maximum Segment Size (MSS) to check if any intermediate node is modifying the connection settings.
A detailed description of all of the Middlebox protocol messages can be found in the [NDTProtocol#Middlebox_test NDT Protocol document].
-As a first step the server binds an ephemeral port and notify the client about this port number. The server also sets MSS on this port to 1456 (a strange value that it is unlikely a routers will have been tested with, so this also tests that they can handle such weird MSS sizes).
-
-Next, the client connects to the server's ephemeral port. When the connection is successfully established, the server sets the maximum value of the congestion window for this connection to `2 * (The current maximum segment size (MSS))`.
-
-In the next step the server starts a 5 seconds throughput test using the newly created connection. The NDT server sends packets as fast as possible (i.e. without any delays) during the test. These packets are written using the buffer of the following size: `(The current maximum segment size (MSS))`. If NDT is unable to allocate an appropriate-sized buffer (i.e. malloc() fails), the server uses a 8192 Byte one. The buffer contains a pre-generated pseudo random data (including only US-ASCII printable characters).
-
-The server can temporarily stop sending packets when the following formula is fulfilled:
-{{{
-BUFFER_SIZE * 16 < ((Next Sequence Number To Be Sent) - (Oldest Unacknowledged Sequence Number) - 1)
-}}}
-
-The both `"Next Sequence Number To Be Sent"` and `"Oldest Unacknowledged Sequence Number"` values are obtained from the connection with the help of the [http://www.web100.org/ web100] library.
-
-When the 5 seconds throughput test is over, the server sends the following results to the client:
-
-|| CurMSS || The current maximum segment size (MSS), in octets.||
-|| !WinScaleSent || The value of the transmitted window scale option if one was sent; otherwise, a value of -1. ||
-|| !WinScaleRcvd || The value of the received window scale option if one was received; otherwise, a value of -1. ||
-
-Next, the client sends its calculated throughput value to the server. The throughput value is calculated by taking the received bytes over the duration of the test. This value, in Bps, is then converted to kbps. This can be shown by the following formula:
-{{{
-THROUGHPUT_VALUE = (RECEIVED_BYTES / TEST_DURATION_SECONDS) * 8 / 1000
-}}}
+ # The server selects a random port and opens a socket on that port
+ # The server sends this port number to the client
+ # The server sets the MSS on this port to 1456
+ # The client creates a connection to the server's port
+ # The server sets the congestion window of the connection to be `2 * (The current MSS)`
+ # The server performs a 5 second throughput test over the connection
+ # The server can temporarily stop sending packets when the following formula is fulfilled:
+ {{{
+ BUFFER_SIZE * 16 < ((Next Sequence Number To Be Sent) - (Oldest Unacknowledged Sequence Number) - 1)
+ }}}
+ Both the `"Next Sequence Number To Be Sent"` and `"Oldest Unacknowledged Sequence Number"` values are obtained from the connection with the help of the [http://www.web100.org/ web100] library.
+ # After the throughput test, the server sends the following results to the client:
+ || CurMSS || The current maximum segment size (MSS), in octets.||
+ || !WinScaleSent || The value of the transmitted window scale option if one was sent; otherwise, a value of -1. ||
+ || !WinScaleRcvd || The value of the received window scale option if one was received; otherwise, a value of -1. ||
+ # After the client has received the results, it sends its calculated throughput value to the server. The throughput value is calculated by taking the received bytes over the duration of the test. This value, in Bps, is then converted to kbps. This can be shown by the following formula:
+ {{{
+ THROUGHPUT_VALUE = (RECEIVED_BYTES / TEST_DURATION_SECONDS) * 8 / 1000
+ }}}
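The pause condition and the throughput formula above can be sketched as follows. This is an illustrative Python sketch, not NDT's actual C code; the function names are assumptions, and in the real server the `snd_nxt`/`snd_una` values come from the web100 library.

```python
def should_pause_sending(buffer_size, snd_nxt, snd_una):
    """True when more than 16 buffers' worth of data is outstanding.

    snd_nxt: Next Sequence Number To Be Sent (from web100 in real NDT).
    snd_una: Oldest Unacknowledged Sequence Number (from web100 in real NDT).
    """
    return buffer_size * 16 < (snd_nxt - snd_una - 1)


def throughput_kbps(received_bytes, test_duration_seconds):
    """Bytes over the test duration, converted from Bps to kbps."""
    return (received_bytes / test_duration_seconds) * 8 / 1000


# e.g. 6,250,000 bytes over the 5 second test -> 10,000 kbps (10 Mbps)
```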
==== Known issues (Middlebox Test) ====
@@ -74,23 +70,19 @@
A detailed description of all of the SFW protocol messages can be found in the [NDTProtocol#Simple_firewall_test NDT Protocol document].
-As a first step both NDT components (the server and the client) bind an ephemeral port and notifies the second component about this port number. In the second step both NDT components are executing in parallel:
- * The client is trying to connect to the server's ephemeral port and send a TEST_MSG message containing a pre-defined string "Simple firewall test" of length 20 using the newly created connection.
- * The server is trying to connect to the client's ephemeral port and send a TEST_MSG message containing a pre-defined string "Simple firewall test" of length 20 using the newly created connection.
-
-Both client and server are waiting for a valid connection a limited amount of time. If the MaxRTT or MaxRTO is greater than 3 seconds, than the time limit in the SFW test is 3 seconds. Otherwise the time limit in the SFW test is 1 second.
-
-The test is finished after the connection will be accepted or the time limit will be exceeded. If the time limit is exceeded, the firewall probably exists somewhere on the end-to-end path. If there is a connection and the pre-defined string is properly transferred, then there is probably no firewall on the end-to-end path (technically there still could be a firewall with a range of opened ports or a special rules that allowed this one particular connection to the ephemeral port). The third possibility is that there is a successful connection, but the expected pre-defined string is not transferred. This case does not adjudicate about the firewall existence.
-
-In the last step the server sends its results to the client.
-
-The possible simple firewall test result codes:
-
+ # The server selects a random port and opens a socket on that port
+ # The server sends this port number to the client
+ # The client selects a random port and opens a socket on that port
+ # The client sends this port number to the server
 # In parallel, the client and server each try to connect to the other component's port
+ # When the client or server connects to the other component, it sends a TEST_MSG message containing a pre-defined string "Simple firewall test" over the new connection.
 # If no connection is established within 3 seconds (or 1 second, if both MaxRTT and MaxRTO are less than 3 seconds), the client or server stops waiting and closes the port
 # Once the server has both finished connecting to the client and finished waiting for the client's connection, it sends its result to the client as one of the following values:
|| *Value* || *Description* ||
-|| "0" || Test was not started/results were not received (this means an error condition like protocol error, which cannot happen during normal operation) ||
-|| "1" || Test was successful (i.e. connection to the ephemeral port was possible and the pre-defined string was received) ||
-|| "2" || There was a connection to the ephemeral port, but the pre-defined string was not received ||
-|| "3" || There was no connection to the ephemeral port within the specified time ||
+|| "0" || The test was not started/results were not received (this means an error condition like the client sending the wrong control messages, or an error on the server like an Out-Of-Memory error that prevented it from running the test) ||
+|| "1" || The test was successful (i.e. connection to the random port was possible and the pre-defined string was received) ||
+|| "2" || There was a connection to the random port, but the pre-defined string was not received ||
+|| "3" || There was no connection to the random port within the specified time ||
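As a sketch, the timeout rule and the result codes above can be expressed as follows. This is hypothetical Python for illustration; the names are not NDT's API, and the codes simply mirror the table.

```python
def sfw_timeout_seconds(max_rtt, max_rto):
    """Time limit for the simple firewall test, per the rule above."""
    return 3 if max_rtt > 3 or max_rto > 3 else 1


def sfw_result(connected, received_string, expected="Simple firewall test"):
    """Map the connection outcome to the result codes in the table above."""
    if not connected:
        return "3"  # no connection to the random port within the time limit
    if received_string == expected:
        return "1"  # connection made and pre-defined string received
    return "2"      # connection made, but the string was not received
```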
=== Client-To-Server Throughput Test ===
@@ -98,19 +90,19 @@
A detailed description of all of the Client-To-Server protocol messages can be found in the [NDTProtocol#C2S_throughput_test NDT Protocol document].
-As a first step the server binds a new port and notifies the client about this port number.
-
-Next, the client connects to the server's newly bound port. When the connection is successfully established, the server initializes the following routines:
- * libpcap routines to perform packet trace used by the [NDTTestMethodology#Bottleneck_Link_Detection Bottleneck Link Detection] algorithm.
- * [NDTDataFormat#tcpdump_trace tcpdump trace] to save to a standard tcpdump file all packets sent during the [NDTTestMethodology#Client-To-Server_Throughput_Test Client-To-Server throughput test] on the newly created connection. This tcpdump trace dump is only started when the `-t, --tcpdump` options are set. The use of tcpdump duplicates work being done by the libpcap trace above. However, this approach simplifies the NDT codebase.
- * [NDTDataFormat#web100_snaplog_trace web100 snaplog trace] to dump web100 kernel MIB variables' values written in a fixed time interval (default is 5 msec) during the [NDTTestMethodology#Client-To-Server_Throughput_Test Client-To-Server throughput test] for the newly created connection. This snaplog trace dump is only started when the `--snaplog` option is set.
-
-In the next step the client starts a 10 seconds throughput test using the newly created connection. The NDT client sends packets as fast as possible (i.e. without any delays) during the test. These packets are written using the 8192 Byte buffer containing a pre-generated pseudo random data (including only US-ASCII printable characters).
-
-When the 10 seconds throughput test is over, the server sends its calculated throughput value to the client. The throughput value is calculated by taking the received bytes over the duration of the test. This value, in Bps, is then converted to kbps. This can be shown by the following formula:
-{{{
-THROUGHPUT_VALUE = (RECEIVED_BYTES / TEST_DURATION_SECONDS) * 8 / 1000
-}}}
+ # The server selects a random port and opens a socket on that port
+ # The server sends this port number to the client
+ # The client connects to the port the server opened
+ # The server starts one or more of the following routines:
+ * libpcap routines to perform packet trace used by the [NDTTestMethodology#Bottleneck_Link_Detection Bottleneck Link Detection] algorithm.
+ * [NDTDataFormat#tcpdump_trace tcpdump trace] to save to a standard tcpdump file all packets sent during the [NDTTestMethodology#Client-To-Server_Throughput_Test Client-To-Server throughput test] on the newly created connection. This tcpdump trace dump is only started when the `-t, --tcpdump` option is set. The use of tcpdump duplicates work being done by the libpcap trace above. However, this approach simplifies the NDT codebase.
+ * [NDTDataFormat#web100_snaplog_trace web100 snaplog trace] to dump web100 kernel MIB variables' values written in a fixed time interval (default is 5 msec) during the [NDTTestMethodology#Client-To-Server_Throughput_Test Client-To-Server throughput test] for the newly created connection. This snaplog trace dump is only started when the `--snaplog` option is set.
+ # The client performs a 10 second throughput test over the newly created connection
 # The server calculates its throughput, in kbps, according to the following formula:
+ {{{
+ THROUGHPUT_VALUE = (RECEIVED_BYTES / TEST_DURATION_SECONDS) * 8 / 1000
+ }}}
+ # The server sends the calculated throughput value to the client
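The test payload described above (an 8192 Byte buffer of pre-generated pseudo-random data containing only printable US-ASCII characters) can be sketched like this. The function name and the seeding are illustrative assumptions, not NDT's actual implementation.

```python
import random

BUFFER_SIZE = 8192  # bytes written per send during the throughput test


def make_test_buffer(size=BUFFER_SIZE, seed=0):
    """Pre-generate a buffer of pseudo-random printable US-ASCII bytes.

    Seeded once up front, so the same buffer is reused for every write,
    mirroring the 'pre-generated' payload the text describes.
    """
    rng = random.Random(seed)
    # 0x20..0x7E is the printable US-ASCII range (space through tilde).
    return bytes(rng.randint(0x20, 0x7E) for _ in range(size))
```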
==== Known Limitations (Client-To-Server Throughput Test) ====
@@ -122,22 +114,21 @@
A detailed description of all of the Server-To-Client protocol messages can be found in the [NDTProtocol#S2C_throughput_test NDT Protocol document].
-As a first step the server binds a new port and notifies the client about this port number.
-
-Next, the client connects to the server's newly bound port. When the connection is successfully established, the server initializes the following routines:
- * libpcap routines to perform packet trace used by the [NDTTestMethodology#Bottleneck_Link_Detection Bottleneck Link Detection] algorithm.
- * [NDTDataFormat#tcpdump_trace tcpdump trace] to save to a standard tcpdump file all packets sent during the [NDTTestMethodology#Client-To-Server_Throughput_Test Client-To-Server throughput test] on the newly created connection. This tcpdump trace dump is only started when the `-t, --tcpdump` options are set. The use of tcpdump duplicates work being done by the libpcap trace above. However, this approach simplifies the NDT codebase.
- * [NDTDataFormat#web100_snaplog_trace web100 snaplog trace] to dump web100 kernel MIB variables' values written in a fixed time (default is 5 msec) increments during the [NDTTestMethodology#Server-To-Client_Throughput_Test Server-To-Client throughput test] for the newly created connection. This snaplog trace dump is only started when the `--snaplog` option is set.
-
-In the next step the server starts a 10 seconds throughput test using the newly created connection. The NDT server sends packets as fast as possible (i.e. without any delays) during the test. These packets are written using the 8192 Byte buffer containing a pre-generated pseudo random data (including only US-ASCII printable characters).
-
-When the 10 seconds throughput test is over, the server sends to the client its calculated throughput value, the amount of unsent data in the socket send queue and the total number of bytes the application believed it had sent, as returned from calls to the 'send' syscall. The throughput value is calculated by taking that total number of bytes and dividing by the duration of the test. This value, in Bps, is then converted to kbps. This can be shown by the following formula:
-
-{{{
-THROUGHPUT_VALUE = (TRANSMITTED_BYTES / TEST_DURATION_SECONDS) * 8 / 1000
-}}}
-
-Additionally, at the end of the Server-To-Client throughput test, the server takes a web100 snapshot and sends all the web100 data variables to the client.
+ # The server selects a random port and opens a socket on that port
+ # The server sends this port number to the client
+ # The client connects to the port the server opened
+ # The server starts one or more of the following routines:
+ * libpcap routines to perform packet trace used by the [NDTTestMethodology#Bottleneck_Link_Detection Bottleneck Link Detection] algorithm.
+ * [NDTDataFormat#tcpdump_trace tcpdump trace] to save to a standard tcpdump file all packets sent during the [NDTTestMethodology#Server-To-Client_Throughput_Test Server-To-Client throughput test] on the newly created connection. This tcpdump trace dump is only started when the `-t, --tcpdump` option is set. The use of tcpdump duplicates work being done by the libpcap trace above. However, this approach simplifies the NDT codebase.
+ * [NDTDataFormat#web100_snaplog_trace web100 snaplog trace] to dump web100 kernel MIB variables' values written in a fixed time interval (default is 5 msec) during the [NDTTestMethodology#Server-To-Client_Throughput_Test Server-To-Client throughput test] for the newly created connection. This snaplog trace dump is only started when the `--snaplog` option is set.
+ # The client performs a 10 second throughput test over the newly created connection
+ # The server takes a web100 snapshot
 # The server calculates its throughput, in kbps, according to the following formula:
+ {{{
+ THROUGHPUT_VALUE = (BYTES_SENT_TO_SEND_SYSCALL / TEST_DURATION_SECONDS) * 8 / 1000
+ }}}
+ # The server sends to the client its calculated throughput value, the amount of unsent data in the socket send queue and the total number of bytes the application sent to the send syscall
+ # The server sends to the client all the web100 variables it collected in the final snapshot
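The server-side calculation above is based on bytes handed to the `send` syscall, so data still sitting unsent in the socket send queue inflates the value; that is why the server also reports the unsent-queue length. A sketch (illustrative Python; the names are assumptions, and the second function is a hypothetical variant, not something NDT computes):

```python
def s2c_throughput_kbps(bytes_sent_to_send_syscall, test_duration_seconds):
    """The server's reported value: send-syscall bytes over the duration."""
    return (bytes_sent_to_send_syscall / test_duration_seconds) * 8 / 1000


def delivered_throughput_kbps(bytes_sent_to_send_syscall, unsent_queue_bytes,
                              test_duration_seconds):
    """Hypothetical conservative variant excluding bytes never sent."""
    delivered = bytes_sent_to_send_syscall - unsent_queue_bytes
    return (delivered / test_duration_seconds) * 8 / 1000
```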
==== Known Limitations (Server-To-Client Throughput Test) ====
@@ -169,16 +160,16 @@
The bins are defined in mbits/second:
- * 0 < calculated speed (mbits/second) <= 0.01 - *RTT*
- * 0.01 < calculated speed (mbits/second) <= 0.064 - *Dial-up Modem*
- * 0.064 < calculated speed (mbits/second) <= 1.5 - *Cable/DSL modem*
- * 1.5 < calculated speed (mbits/second) <= 10 - *10 Mbps Ethernet or !WiFi 11b subnet*
- * 10 < calculated speed (mbits/second) <= 40 - *45 Mbps T3/DS3 or !WiFi 11 a/g subnet*
- * 40 < calculated speed (mbits/second) <= 100 - *100 Mbps Fast Ethernet subnet*
- * 100 < calculated speed (mbits/second) <= 622 - *a 622 Mbps OC-12 subnet*
- * 622 < calculated speed (mbits/second) <= 1000 - *1.0 Gbps Gigabit Ethernet subnet*
- * 1000 < calculated speed (mbits/second) <= 2400 - *2.4 Gbps OC-48 subnet*
- * 2400 < calculated speed (mbits/second) <= 10000 - *10 Gbps 10 Gigabit Ethernet/OC-192 subnet*
+ * 0 < inter-packet throughput (mbits/second) <= 0.01 - *RTT*
+ * 0.01 < inter-packet throughput (mbits/second) <= 0.064 - *Dial-up Modem*
+ * 0.064 < inter-packet throughput (mbits/second) <= 1.5 - *Cable/DSL modem*
+ * 1.5 < inter-packet throughput (mbits/second) <= 10 - *10 Mbps Ethernet or !WiFi 11b subnet*
+ * 10 < inter-packet throughput (mbits/second) <= 40 - *45 Mbps T3/DS3 or !WiFi 11 a/g subnet*
+ * 40 < inter-packet throughput (mbits/second) <= 100 - *100 Mbps Fast Ethernet subnet*
+ * 100 < inter-packet throughput (mbits/second) <= 622 - *a 622 Mbps OC-12 subnet*
+ * 622 < inter-packet throughput (mbits/second) <= 1000 - *1.0 Gbps Gigabit Ethernet subnet*
+ * 1000 < inter-packet throughput (mbits/second) <= 2400 - *2.4 Gbps OC-48 subnet*
+ * 2400 < inter-packet throughput (mbits/second) <= 10000 - *10 Gbps 10 Gigabit Ethernet/OC-192 subnet*
* bits cannot be determined - *Retransmissions* (this bin counts the duplicated or invalid packets and does not denote a real link type)
* otherwise - ?
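The binning above can be sketched as a simple threshold lookup. The labels mirror the list; the function itself is illustrative Python, not NDT's implementation.

```python
# Upper bound of each bin in mbits/second, with its link-type label.
BINS = [
    (0.01, "RTT"),
    (0.064, "Dial-up Modem"),
    (1.5, "Cable/DSL modem"),
    (10, "10 Mbps Ethernet or WiFi 11b subnet"),
    (40, "45 Mbps T3/DS3 or WiFi 11 a/g subnet"),
    (100, "100 Mbps Fast Ethernet subnet"),
    (622, "a 622 Mbps OC-12 subnet"),
    (1000, "1.0 Gbps Gigabit Ethernet subnet"),
    (2400, "2.4 Gbps OC-48 subnet"),
    (10000, "10 Gbps 10 Gigabit Ethernet/OC-192 subnet"),
]


def classify(inter_packet_throughput_mbps):
    """Return the link-type bin for an inter-packet throughput sample."""
    if inter_packet_throughput_mbps <= 0:
        return "?"
    for upper_bound, label in BINS:
        if inter_packet_throughput_mbps <= upper_bound:
            return label
    return "?"
```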