<feed xmlns='http://www.w3.org/2005/Atom'>
<title>xavi/android_kernel_m2note/net/ipv4/tcp_input.c, branch ng-7.1.2</title>
<subtitle>Unnamed repository; edit this file 'description' to name the repository.
</subtitle>
<id>https://gitea.privatedns.org/xavi/android_kernel_m2note/atom?h=ng-7.1.2</id>
<link rel='self' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/atom?h=ng-7.1.2'/>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/'/>
<updated>2019-07-18T18:56:08+00:00</updated>
<entry>
<title>tcp: limit payload size of sacked skbs</title>
<updated>2019-07-18T18:56:08+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2019-06-16T00:31:03+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=dfd57517dc90b6158a281266aeee893fd2909076'/>
<id>urn:sha1:dfd57517dc90b6158a281266aeee893fd2909076</id>
<content type='text'>
commit 3b4929f65b0d8249f19a50245cd88ed1a2f78cff upstream.

Jonathan Looney reported that TCP can trigger the following crash
in tcp_shifted_skb():

	BUG_ON(tcp_skb_pcount(skb) &lt; pcount);

This can happen if the remote peer has advertised the smallest
MSS that Linux TCP accepts: 48.

An skb can hold 17 fragments, and each fragment can hold 32KB
on x86, or 64KB on PowerPC.

This means that the 16-bit width of TCP_SKB_CB(skb)-&gt;tcp_gso_segs
can overflow.

Note that tcp_sendmsg() builds skbs with less than 64KB
of payload, so this problem needs SACK to be enabled.
SACK blocks allow TCP to coalesce multiple skbs in the retransmit
queue, thus filling the 17 fragments to maximal capacity.

CVE-2019-11477 -- u16 overflow of TCP_SKB_CB(skb)-&gt;tcp_gso_segs
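
As a back-of-the-envelope check (a sketch, not the kernel code; it
assumes the worst case of ~40 bytes of TCP options, leaving 8 data
bytes per 48-byte-MSS segment):

	#include &lt;stdio.h&gt;

	int main(void)
	{
		unsigned int frags = 17;              /* max frags per skb */
		unsigned int frag_bytes = 32 * 1024;  /* per-frag limit on x86 */
		unsigned int payload = frags * frag_bytes;  /* 557056 bytes */
		unsigned int segs = payload / 8;      /* 69632 segments */

		/* 69632 exceeds 65535: a u16 tcp_gso_segs wraps to 4096 */
		printf("segs=%u, as u16=%u\n", segs, (unsigned short)segs);
		return 0;
	}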

Backport notes, provided by Joao Martins &lt;joao.m.martins@oracle.com&gt;

v4.15 and later, since commit 737ff314563 ("tcp: use sequence distance to
detect reordering"), switched from packet-based FACK tracking to
sequence-based reordering detection.

v4.14 and older still have the old logic, hence tcp_skb_shift_data()
needs to retain its original behaviour and keep @fack_count in sync. In
other words, we keep the increment of pcount by tcp_skb_pcount(skb) so
it can later be used to update fack_count. To make this more explicit,
we track the new skb's contribution to pcount in @next_pcount, which
also lets us avoid repeated invocations of tcp_skb_pcount(skb)
altogether.

Fixes: 832d11c5cd07 ("tcp: Try to restore large SKBs while SACK processing")
Change-Id: Ia549e9b12cd033edd93f90e13c6c0e255f74c399
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Reported-by: Jonathan Looney &lt;jtl@netflix.com&gt;
Acked-by: Neal Cardwell &lt;ncardwell@google.com&gt;
Reviewed-by: Tyler Hicks &lt;tyhicks@canonical.com&gt;
Cc: Yuchung Cheng &lt;ycheng@google.com&gt;
Cc: Bruce Curtis &lt;brucec@netflix.com&gt;
Cc: Jonathan Lemon &lt;jonathan.lemon@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>tcp: don't read out-of-bounds opsize</title>
<updated>2019-05-03T16:58:41+00:00</updated>
<author>
<name>Jann Horn</name>
<email>jannh@google.com</email>
</author>
<published>2018-04-20T13:57:30+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=1eb69cd5ada06fcf4c0348b7e2fef2d4349d3369'/>
<id>urn:sha1:1eb69cd5ada06fcf4c0348b7e2fef2d4349d3369</id>
<content type='text'>
commit 7e5a206ab686f098367b61aca989f5cdfa8114a3 upstream.

The old code reads the "opsize" variable from out-of-bounds memory (first
byte behind the segment) if a broken TCP segment ends directly after an
opcode that is neither EOL nor NOP.

The result of the read isn't used for anything, so the worst thing that
could theoretically happen is a page fault; and since the physmap is
usually mostly contiguous, even that seems pretty unlikely.

The following C reproducer triggers the out-of-bounds read - however, you
can't actually see anything happen unless you put something like a
pr_warn() in tcp_parse_md5sig_option() to print the opsize.

====================================

/* Note: the includes and the IPADDR() helper below were omitted from
 * the original posting; they are assumed here for a standalone build.
 */
#define _GNU_SOURCE /* for vasprintf() */
#include &lt;assert.h&gt;
#include &lt;err.h&gt;
#include &lt;fcntl.h&gt;
#include &lt;stdarg.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;
#include &lt;string.h&gt;
#include &lt;unistd.h&gt;
#include &lt;sys/ioctl.h&gt;
#include &lt;arpa/inet.h&gt;
#include &lt;linux/if.h&gt;
#include &lt;linux/if_tun.h&gt;
#include &lt;linux/ip.h&gt;
#include &lt;linux/tcp.h&gt;

/* Assumed helper: an IPv4 address constant in network byte order. */
#define IPADDR(a, b, c, d) htonl(((a) &lt;&lt; 24) | ((b) &lt;&lt; 16) | ((c) &lt;&lt; 8) | (d))

void systemf(const char *command, ...) {
  char *full_command;
  va_list ap;
  va_start(ap, command);
  if (vasprintf(&amp;full_command, command, ap) == -1)
    err(1, "vasprintf");
  va_end(ap);
  printf("systemf: &lt;&lt;&lt;%s&gt;&gt;&gt;\n", full_command);
  system(full_command);
}

char *devname;

int tun_alloc(char *name) {
  int fd = open("/dev/net/tun", O_RDWR);
  if (fd == -1)
    err(1, "open tun dev");
  static struct ifreq req = { .ifr_flags = IFF_TUN|IFF_NO_PI };
  strcpy(req.ifr_name, name);
  if (ioctl(fd, TUNSETIFF, &amp;req))
    err(1, "TUNSETIFF");
  devname = req.ifr_name;
  printf("device name: %s\n", devname);
  return fd;
}

void sum_accumulate(unsigned int *sum, void *data, int len) {
  assert((len&amp;2)==0);
  for (int i=0; i&lt;len/2; i++) {
    *sum += ntohs(((unsigned short *)data)[i]);
  }
}

unsigned short sum_final(unsigned int sum) {
  sum = (sum &gt;&gt; 16) + (sum &amp; 0xffff);
  sum = (sum &gt;&gt; 16) + (sum &amp; 0xffff);
  return htons(~sum);
}

void fix_ip_sum(struct iphdr *ip) {
  unsigned int sum = 0;
  sum_accumulate(&amp;sum, ip, sizeof(*ip));
  ip-&gt;check = sum_final(sum);
}

void fix_tcp_sum(struct iphdr *ip, struct tcphdr *tcp) {
  unsigned int sum = 0;
  struct {
    unsigned int saddr;
    unsigned int daddr;
    unsigned char pad;
    unsigned char proto_num;
    unsigned short tcp_len;
  } fakehdr = {
    .saddr = ip-&gt;saddr,
    .daddr = ip-&gt;daddr,
    .proto_num = ip-&gt;protocol,
    .tcp_len = htons(ntohs(ip-&gt;tot_len) - ip-&gt;ihl*4)
  };
  sum_accumulate(&amp;sum, &amp;fakehdr, sizeof(fakehdr));
  sum_accumulate(&amp;sum, tcp, tcp-&gt;doff*4);
  tcp-&gt;check = sum_final(sum);
}

int main(void) {
  int tun_fd = tun_alloc("inject_dev%d");
  systemf("ip link set %s up", devname);
  systemf("ip addr add 192.168.42.1/24 dev %s", devname);

  struct {
    struct iphdr ip;
    struct tcphdr tcp;
    unsigned char tcp_opts[20];
  } __attribute__((packed)) syn_packet = {
    .ip = {
      .ihl = sizeof(struct iphdr)/4,
      .version = 4,
      .tot_len = htons(sizeof(syn_packet)),
      .ttl = 30,
      .protocol = IPPROTO_TCP,
      /* FIXUP check */
      .saddr = IPADDR(192,168,42,2),
      .daddr = IPADDR(192,168,42,1)
    },
    .tcp = {
      .source = htons(1),
      .dest = htons(1337),
      .seq = 0x12345678,
      .doff = (sizeof(syn_packet.tcp)+sizeof(syn_packet.tcp_opts))/4,
      .syn = 1,
      .window = htons(64),
      .check = 0 /*FIXUP*/
    },
    .tcp_opts = {
      /* INVALID: trailing MD5SIG opcode after NOPs */
      1, 1, 1, 1, 1,
      1, 1, 1, 1, 1,
      1, 1, 1, 1, 1,
      1, 1, 1, 1, 19
    }
  };
  fix_ip_sum(&amp;syn_packet.ip);
  fix_tcp_sum(&amp;syn_packet.ip, &amp;syn_packet.tcp);
  while (1) {
    int write_res = write(tun_fd, &amp;syn_packet, sizeof(syn_packet));
    if (write_res != sizeof(syn_packet))
      err(1, "packet write failed");
  }
}
====================================
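
The fix itself is a bounds check in tcp_parse_md5sig_option(): refuse
to read the size byte when fewer than two option bytes remain. Roughly:

	default:
		if (length &lt; 2)
			return NULL;
		opsize = *ptr++;
		...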

Fixes: cfb6eeb4c860 ("[TCP]: MD5 Signature Option (RFC2385) support.")
Change-Id: Id94ac39b815531dd2af138bcc5e65be2652ec9f2
Signed-off-by: Jann Horn &lt;jannh@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
</content>
</entry>
<entry>
<title>UPSTREAM: tcp: detect malicious patterns in tcp_collapse_ofo_queue()</title>
<updated>2018-11-27T11:48:00+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2018-07-27T10:27:07+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=406a9a01c1fab916894abb4a772103e51c6a6551'/>
<id>urn:sha1:406a9a01c1fab916894abb4a772103e51c6a6551</id>
<content type='text'>
[ Upstream commit 3d4bf93ac12003f9b8e1e2de37fe27983deebdcf ]

In case an attacker feeds tiny packets completely out of order,
tcp_collapse_ofo_queue() might scan the whole rb-tree, performing
expensive copies, but not changing socket memory usage at all.

1) Do not attempt to collapse tiny skbs.
2) Add logic to exit early when too many tiny skbs are detected.

We prefer not doing aggressive collapsing (which copies packets)
for pathological flows, and revert to tcp_prune_ofo_queue() which
will be less expensive.
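
A sketch of the shape of the upstream check in
tcp_collapse_ofo_queue() (not the verbatim backport; @range_truesize
and @sum_tiny accumulate over the scanned range):

	/* Collapse a range only when it is large or dominated by data;
	 * otherwise account it as tiny and give up past a threshold.
	 */
	if (range_truesize != head-&gt;truesize ||
	    end - start &gt;= SKB_WITH_OVERHEAD(SK_MEM_QUANTUM)) {
		tcp_collapse(sk, NULL, &amp;tp-&gt;out_of_order_queue,
			     head, skb, start, end);
	} else {
		sum_tiny += range_truesize;
		if (sum_tiny &gt; sk-&gt;sk_rcvbuf &gt;&gt; 3)
			return;
	}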

In the future, we might add the possibility of terminating flows
that are proven to be malicious.

Change-Id: I635a058ea387b224d1d0ac7653cc4dfc0aadab3a
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Acked-by: Soheil Hassas Yeganeh &lt;soheil@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@google.com&gt;
Signed-off-by: Chenbo Feng &lt;fengc@google.com&gt;
</content>
</entry>
<entry>
<title>UPSTREAM: tcp: avoid collapses in tcp_prune_queue() if possible</title>
<updated>2018-11-27T11:47:40+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2018-07-27T10:27:06+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=827be32ec5bd49d3f8386b87c247d5685eadaa25'/>
<id>urn:sha1:827be32ec5bd49d3f8386b87c247d5685eadaa25</id>
<content type='text'>
[ Upstream commit f4a3313d8e2ca9fd8d8f45e40a2903ba782607e7 ]

Right after a TCP flow is created, receiving tiny out-of-order
packets always hits the condition:

if (atomic_read(&amp;sk-&gt;sk_rmem_alloc) &gt;= sk-&gt;sk_rcvbuf)
	tcp_clamp_window(sk);

tcp_clamp_window() increases sk_rcvbuf to match sk_rmem_alloc
(guarded by tcp_rmem[2])

Calling tcp_collapse_ofo_queue() in this case is not useful,
and offers an O(N^2) attack surface to malicious peers.

It is better not to attempt anything before full queue capacity is
reached, forcing the attacker to spend lots of resources and allowing
us to detect the abuse more easily.
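
The change amounts to an early return in tcp_prune_queue(), before any
collapsing is attempted (shape of the upstream patch):

	if (atomic_read(&amp;sk-&gt;sk_rmem_alloc) &lt;= sk-&gt;sk_rcvbuf)
		return 0;

	tcp_collapse_ofo_queue(sk);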

Change-Id: Ib4fabbd6f22b51fd6eea66a0f3b210543d3ebe01
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Acked-by: Soheil Hassas Yeganeh &lt;soheil@google.com&gt;
Acked-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@google.com&gt;
Signed-off-by: Chenbo Feng &lt;fengc@google.com&gt;
</content>
</entry>
<entry>
<title>tcp: fix a compile error in DBGUNDO()</title>
<updated>2017-12-05T17:06:05+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2016-09-23T00:54:00+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=a4146f564ccf47e7fc04a8a30b9a308af76a4ed5'/>
<id>urn:sha1:a4146f564ccf47e7fc04a8a30b9a308af76a4ed5</id>
<content type='text'>
If DBGUNDO() is enabled (FASTRETRANS_DEBUG &gt; 1), a compile
error occurs, since inet6_sk(sk)-&gt;daddr became sk-&gt;sk_v6_daddr.
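
The fix is mechanical: the DBGUNDO() printout for IPv6 sockets uses
the renamed field, roughly:

	-	&amp;np-&gt;daddr, ...
	+	&amp;sk-&gt;sk_v6_daddr, ...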

Fixes: efe4208f47f9 ("ipv6: make lookups simpler and faster")
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>tcp: when rearming RTO, if RTO time is in past then fire RTO ASAP</title>
<updated>2017-11-06T14:29:40+00:00</updated>
<author>
<name>Neal Cardwell</name>
<email>ncardwell@google.com</email>
</author>
<published>2017-08-16T21:53:36+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=8b3e46bb077b62a006a23d2ea1ba228b5829ddd2'/>
<id>urn:sha1:8b3e46bb077b62a006a23d2ea1ba228b5829ddd2</id>
<content type='text'>
commit cdbeb633ca71a02b7b63bfeb94994bf4e1a0b894 upstream.

In some situations tcp_send_loss_probe() can realize that it's unable
to send a loss probe (TLP), and falls back to calling tcp_rearm_rto()
to schedule an RTO timer. In such cases, sometimes tcp_rearm_rto()
realizes that the RTO was eligible to fire immediately or at some
point in the past (delta_us &lt;= 0). Previously in such cases
tcp_rearm_rto() was scheduling such "overdue" RTOs to happen at now +
icsk_rto, which caused needless delays of hundreds of milliseconds
(and non-linear behavior that made reproducible testing
difficult). This commit changes the logic to schedule "overdue" RTOs
ASAP, rather than at now + icsk_rto.
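
The shape of the change in tcp_rearm_rto() (the 3.10 backport stays in
jiffies, per the note below):

	/* Before: an overdue timer (delta &lt;= 0) silently kept the full
	 * RTO, i.e. fired at now + icsk_rto.
	 */
	if (delta &gt; 0)
		rto = delta;

	/* After: clamp to the minimum so overdue RTOs fire ASAP. */
	rto = max_t(int, delta, 1);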

Fixes: 6ba8a3b19e76 ("tcp: Tail loss probe (TLP)")
Suggested-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: Neal Cardwell &lt;ncardwell@google.com&gt;
Signed-off-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
[wt: no need for usec_to_jiffies conversion in 3.10]

Signed-off-by: Willy Tarreau &lt;w@1wt.eu&gt;
</content>
</entry>
<entry>
<title>tcp: avoid setting cwnd to invalid ssthresh after cwnd reduction states</title>
<updated>2017-11-06T14:29:37+00:00</updated>
<author>
<name>Yuchung Cheng</name>
<email>ycheng@google.com</email>
</author>
<published>2017-08-01T20:22:32+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=1b6bfc51d146fc965c06d9d912c967ab4cdca6ff'/>
<id>urn:sha1:1b6bfc51d146fc965c06d9d912c967ab4cdca6ff</id>
<content type='text'>
commit ed254971edea92c3ac5c67c6a05247a92aa6075e upstream.

If the sender switches the congestion control during the ECN-triggered
cwnd-reduction state (CA_CWR), then upon exiting recovery, cwnd is set
to the ssthresh value calculated by the previous congestion control. If
the previous congestion control is BBR, which always keeps ssthresh at
TCP_INFINITE_SSTHRESH, cwnd ends up being infinite. The safe step is to
avoid assigning an invalid ssthresh value when recovery ends.
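
The resulting guard in tcp_end_cwnd_reduction() looks roughly like:

	/* Reset cwnd to ssthresh in CWR or Recovery (unless it's undone) */
	if (tp-&gt;snd_ssthresh &lt; TCP_INFINITE_SSTHRESH &amp;&amp;
	    (inet_csk(sk)-&gt;icsk_ca_state == TCP_CA_CWR || tp-&gt;undo_marker)) {
		tp-&gt;snd_cwnd = tp-&gt;snd_ssthresh;
		tp-&gt;snd_cwnd_stamp = tcp_time_stamp;
	}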

Signed-off-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: Neal Cardwell &lt;ncardwell@google.com&gt;
Acked-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Willy Tarreau &lt;w@1wt.eu&gt;
</content>
</entry>
<entry>
<title>tcp: fix xmit timer to only be reset if data ACKed/SACKed</title>
<updated>2017-11-06T14:17:50+00:00</updated>
<author>
<name>Neal Cardwell</name>
<email>ncardwell@google.com</email>
</author>
<published>2017-07-27T01:43:27+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=8b046d9a8dbb45f821594e29438eba49c50e3156'/>
<id>urn:sha1:8b046d9a8dbb45f821594e29438eba49c50e3156</id>
<content type='text'>
commit df92c8394e6ea0469e8056946ef8add740ab8046 upstream.

Fix a TCP loss recovery performance bug raised recently on the netdev
list, in two threads:

(i)  July 26, 2017: netdev thread "TCP fast retransmit issues"
(ii) July 26, 2017: netdev thread:
     "[PATCH V2 net-next] TLP: Don't reschedule PTO when there's one
     outstanding TLP retransmission"

The basic problem is that incoming TCP packets that did not indicate
forward progress could cause the xmit timer (TLP or RTO) to be rearmed
and pushed back in time. In certain corner cases this could result in
the following problems noted in these threads:

 - Repeated ACKs coming in with bogus SACKs corrupted by middleboxes
   could cause TCP to repeatedly schedule TLPs forever. We kept
   sending TLPs after every ~200ms, which elicited bogus SACKs, which
   caused more TLPs, ad infinitum; we never fired an RTO to fill in
   the holes.

 - Incoming data segments could, in some cases, cause us to reschedule
   our RTO or TLP timer further out in time, for no good reason. This
   could cause repeated inbound data to result in stalls in outbound
   data, in the presence of packet loss.

This commit fixes these bugs by changing the TLP and RTO ACK
processing to:

 (a) Only reschedule the xmit timer once per ACK.

 (b) Only reschedule the xmit timer if tcp_clean_rtx_queue() deems the
     ACK indicates sufficient forward progress (a packet was
     cumulatively ACKed, or we got a SACK for a packet that was sent
     before the most recent retransmit of the write queue head).

This brings us back into closer compliance with the RFCs, since, as
the comment for tcp_rearm_rto() notes, we should only restart the RTO
timer after forward progress on the connection. Previously we were
restarting the xmit timer even in these cases where there was no
forward progress.

As a side benefit, this commit simplifies and speeds up the TCP timer
arming logic. We had been calling inet_csk_reset_xmit_timer() three
times on normal ACKs that cumulatively acknowledged some data:

1) Once near the top of tcp_ack() to switch from TLP timer to RTO:
        if (icsk-&gt;icsk_pending == ICSK_TIME_LOSS_PROBE)
               tcp_rearm_rto(sk);

2) Once in tcp_clean_rtx_queue(), to update the RTO:
        if (flag &amp; FLAG_ACKED) {
               tcp_rearm_rto(sk);

3) Once in tcp_ack() after tcp_fastretrans_alert() to switch from RTO
   to TLP:
        if (icsk-&gt;icsk_pending == ICSK_TIME_RETRANS)
               tcp_schedule_loss_probe(sk);

This commit, by only rescheduling the xmit timer once per ACK,
simplifies the code and reduces CPU overhead.
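
In upstream terms, the consolidation looks roughly like this: the ACK
processing only marks that forward progress was made, and a single
helper arms whichever timer applies:

	/* Restart timer after forward progress on connection.
	 * RFC2988 recommends to restart timer to now+rto.
	 */
	static void tcp_set_xmit_timer(struct sock *sk)
	{
		if (!tcp_schedule_loss_probe(sk))
			tcp_rearm_rto(sk);
	}
	...
	if (flag &amp; FLAG_SET_XMIT_TIMER) /* set TLP or RTO timer */
		tcp_set_xmit_timer(sk);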

This commit was tested in an A/B test with Google web server
traffic. SNMP stats and request latency metrics were within noise
levels, substantiating that for normal web traffic patterns this is a
rare issue. This commit was also tested with packetdrill tests to
verify that it fixes the timer behavior in the corner cases discussed
in the netdev threads mentioned above.

This patch is a bug fix intended to be queued for -stable
releases.

[This version of the commit was compiled and briefly tested
based on top of v3.10.107.]

Change-Id: If0417380fd59290b65cf04a415373aa13dd1dad7
Fixes: 6ba8a3b19e76 ("tcp: Tail loss probe (TLP)")
Reported-by: Klavs Klavsen &lt;kl@vsen.dk&gt;
Reported-by: Mao Wenan &lt;maowenan@huawei.com&gt;
Signed-off-by: Neal Cardwell &lt;ncardwell@google.com&gt;
Signed-off-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: Nandita Dukkipati &lt;nanditad@google.com&gt;
Acked-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: Willy Tarreau &lt;w@1wt.eu&gt;
</content>
</entry>
<entry>
<title>tcp: introduce tcp_rto_delta_us() helper for xmit timer fix</title>
<updated>2017-11-06T14:17:44+00:00</updated>
<author>
<name>Neal Cardwell</name>
<email>ncardwell@google.com</email>
</author>
<published>2017-07-27T14:01:20+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=9de2da5fe959a00373414eada75c94cdd9a3af55'/>
<id>urn:sha1:9de2da5fe959a00373414eada75c94cdd9a3af55</id>
<content type='text'>
commit e1a10ef7fa876f8510aaec36ea5c0cf34baba410 upstream.

Pure refactor. This helper will be required in the xmit timer fix
later in the patch series. (Because the TLP logic will want to make
this calculation.)
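
The upstream helper reads roughly as follows (the 3.10 backport works
in jiffies via tcp_time_stamp rather than in microseconds):

	/* At how many usecs into the future does the RTO fire? */
	static inline s64 tcp_rto_delta_us(const struct sock *sk)
	{
		const struct sk_buff *skb = tcp_write_queue_head(sk);
		u32 rto = inet_csk(sk)-&gt;icsk_rto;
		u64 rto_time_stamp_us = skb-&gt;skb_mstamp + jiffies_to_usecs(rto);

		return rto_time_stamp_us - tcp_sk(sk)-&gt;tcp_mstamp;
	}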

[This version of the commit was compiled and briefly tested
based on top of v3.10.107.]

Change-Id: I1ccfba0b00465454bf5ce22e6fef5f7b5dd94d15
Fixes: 6ba8a3b19e76 ("tcp: Tail loss probe (TLP)")
Signed-off-by: Neal Cardwell &lt;ncardwell@google.com&gt;
Signed-off-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: Nandita Dukkipati &lt;nanditad@google.com&gt;
Acked-by: Eric Dumazet &lt;edumazet@google.com&gt;
Signed-off-by: Willy Tarreau &lt;w@1wt.eu&gt;
</content>
</entry>
<entry>
<title>tcp: initialize icsk_ack.lrcvtime at session start time</title>
<updated>2017-07-04T10:11:09+00:00</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2017-03-22T15:10:21+00:00</published>
<link rel='alternate' type='text/html' href='https://gitea.privatedns.org/xavi/android_kernel_m2note/commit/?id=a4ec3655df97c0b470f3f659c57369d985c8ca4d'/>
<id>urn:sha1:a4ec3655df97c0b470f3f659c57369d985c8ca4d</id>
<content type='text'>
commit 15bb7745e94a665caf42bfaabf0ce062845b533b upstream.

icsk_ack.lrcvtime has a value of 0 at socket creation time.

tcpi_last_data_recv can have a bogus value if no payload is ever
received.

This patch initializes icsk_ack.lrcvtime for active sessions
in tcp_finish_connect(), and for passive sessions in
tcp_create_openreq_child().
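
In both paths the fix amounts to one extra line of initialization,
roughly:

	inet_csk(sk)-&gt;icsk_ack.lrcvtime = tcp_time_stamp;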

Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Acked-by: Neal Cardwell &lt;ncardwell@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Willy Tarreau &lt;w@1wt.eu&gt;
</content>
</entry>
</feed>
