Although these mechanisms are protective, the systemic response to surgery can lead to SIRS (Figure 3 - Appendix 2) [62], [91], [156].
The stress response to surgery and extracorporeal circulation is characterized by increased secretion of stress hormones such as adrenaline and cortisol [91], [108]. This response is more severe in neonates than in older patients [108].
Stress hormones may be involved in the pathogenesis of postoperative infection and organ dysfunction. Activation of the hypothalamic-pituitary-adrenal axis is required to respond to severe stress, but excess cortisol may delay wound healing by promoting catabolism, suppressing immunity, and favoring infection [107].
In addition, sex hormones interact closely with the immune system and modulate inflammatory responses [78], [155]. However, clinical trials on the role of sex hormones have shown conflicting results [78].
1.2.3.2. Stages of progression in the inflammatory response
Regarding the pathogenesis of SIRS, building on Bone's proposed three-stage progression model, the inflammatory response can be divided into four stages (Table 1.1) [78].
Table 1.1. Phases of the inflammatory response

Stage | Characteristics | Manifestations
1 | Local inflammatory response | Swelling, heat, redness, pain
2 | Acute-phase inflammatory response (inflammatory mediators released into the blood) | Laboratory: inflammatory response detectable by testing; Clinical: "SIRS criteria"
3 | Severe systemic inflammatory response syndrome (SIRS) | Severe homeostatic disturbances
4 | Excessive systemic response, systemic immune suppression | MODS
low. The EF PHB requires sufficiently large output-port bandwidth to provide low delay, low loss, and low jitter.
The EF PHB can be implemented if the output port's bandwidth is sufficiently large, combined with small buffer sizes and other network resources dedicated to EF packets, so that the router's service rate μ for EF packets on an output port exceeds the arrival rate λ of EF packets at that port.
This means that EF packets are given a pre-allocated share of output bandwidth and a priority that ensures minimal loss, delay, and jitter before the service is put into operation.
The EF PHB is suitable for channel emulation, leased-line emulation, and real-time services such as voice and video, which cannot tolerate high loss, delay, or jitter.
Figure 2.10 Example of an EF PHB implementation
Figure 2.10 shows an example of an EF PHB implementation using a simple priority-queue scheduling technique. At the edges of the DS domain, EF packet traffic is prioritized according to the values agreed in the SLA. The EF queue in the figure must output packets at a rate μ higher than the packet arrival rate λ. To provide an EF PHB across an end-to-end DS domain, bandwidth at the output ports of the core routers must be allocated in advance so that μ > λ; this can be done by a pre-configured provisioning process. In the figure, EF packets are placed in the priority queue (the upper queue), which is kept short enough that the scheduler can maintain μ > λ.
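The strict-priority service just described can be sketched in a few lines of Python; the class name and queue limits below are illustrative, not taken from any particular router implementation.

```python
from collections import deque

class PriorityScheduler:
    """Two-queue strict-priority scheduler: the EF queue is always served
    first, so EF packets see minimal queuing delay as long as the EF
    arrival rate stays below the service rate (mu > lambda)."""

    def __init__(self, ef_limit=8, be_limit=64):
        self.ef = deque()            # small dedicated buffer for EF packets
        self.be = deque()            # larger buffer for best-effort traffic
        self.ef_limit = ef_limit
        self.be_limit = be_limit

    def enqueue(self, packet, is_ef):
        queue, limit = (self.ef, self.ef_limit) if is_ef else (self.be, self.be_limit)
        if len(queue) >= limit:
            return False             # buffer full: tail drop
        queue.append(packet)
        return True

    def dequeue(self):
        """Serve EF first; best-effort only when the EF queue is empty."""
        if self.ef:
            return self.ef.popleft()
        if self.be:
            return self.be.popleft()
        return None
```

Even if a best-effort packet arrived first, an EF packet waiting in the upper queue is always transmitted ahead of it.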
Since EF is primarily used for real-time services such as voice and video, and since these services typically use UDP rather than TCP, RED is generally not suitable for EF queues: applications using UDP do not respond to random packet drops, so RED would discard packets to no effect.
2.2.4.2 Assured Forwarding (AF) PHB
The AF PHB is defined by RFC 2597. Its purpose is to deliver packets reliably, so delay and jitter are considered less important than packet loss. The AF PHB is therefore suitable for non-real-time services such as applications using TCP. It first defines four classes: AF1, AF2, AF3, AF4. Within each AF class, packets are then assigned to three subclasses with three distinct drop-precedence levels.
Table 2.8 shows the four AF classes, the twelve AF subclasses, and the DSCP values defined for them by RFC 2597. RFC 2597 also allows additional levels to be defined for internal use; however, these levels have only local significance.
PHB class | PHB subclass | Drop precedence | DSCP
AF4 | AF41 | Low | 100010
AF4 | AF42 | Medium | 100100
AF4 | AF43 | High | 100110
AF3 | AF31 | Low | 011010
AF3 | AF32 | Medium | 011100
AF3 | AF33 | High | 011110
AF2 | AF21 | Low | 010010
AF2 | AF22 | Medium | 010100
AF2 | AF23 | High | 010110
AF1 | AF11 | Low | 001010
AF1 | AF12 | Medium | 001100
AF1 | AF13 | High | 001110

Table 2.8 AF DSCPs
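A convenient property of the DSCPs in Table 2.8 is that each AF code point can be computed from its class and drop-precedence digits: the class occupies the three high-order bits, the drop precedence the next two bits, and the lowest bit is zero. A small sketch:

```python
def af_dscp(af_class, drop_prec):
    """DSCP code point for AF subclass AF<class><drop_prec> (RFC 2597).
    The class fills the three high bits, the drop precedence the next
    two bits; the lowest DSCP bit is always 0."""
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (af_class << 3) | (drop_prec << 1)

# AF41 -> 100010 in binary, matching the first row of Table 2.8
assert af_dscp(4, 1) == 0b100010
```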
The AF PHB ensures that packets are forwarded with a high probability of delivery to the destination within the rate agreed in an SLA. If AF traffic at an ingress port exceeds the agreed rate, it is considered non-compliant or "out of profile", and the excess packets are not delivered with the same probability as packets of the defined traffic, the "in profile" packets. When there is network congestion, out-of-profile packets are dropped before in-profile packets.
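The in-profile / out-of-profile decision described above is commonly made with a token-bucket meter. The sketch below assumes a single committed rate and burst size; both parameter values in the usage are illustrative.

```python
class TokenBucketMeter:
    """Marks packets as in or out of profile against a committed rate
    (bytes/s) and burst size (bytes). Tokens accumulate at `rate` up to
    `burst`; a packet is in profile if enough tokens are available."""

    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst          # bucket starts full
        self.last = 0.0              # time of the previous packet

    def meter(self, size, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size      # consume tokens: packet conforms
            return "in-profile"
        return "out-of-profile"      # excess traffic: mark, do not drop here
```

A downstream queue would then prefer dropping "out-of-profile" packets first under congestion, exactly as the text describes.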
When service levels are defined using AF classes, quantitative and qualitative differences between the classes can be realized by allocating different amounts of bandwidth and buffer space to the four AF classes. Unlike EF, most AF traffic is non-real-time TCP traffic, and RED, an Active Queue Management (AQM) strategy, is well suited for use in AF PHBs. The four AF classes can be implemented as four separate queues, with the output-port bandwidth divided among them. Within each AF queue, packets are marked with one of three "colors" corresponding to the three drop-precedence levels.
Of the 32 Pool 1 DSCPs (code points ending in 0), 21 have been standardized: one for the EF PHB, 12 for the AF PHBs, and 8 for the class-selector code points. The remaining 11 Pool 1 DSCPs are still available for future standards.
2.2.5. Example of Differentiated Services
We will look at an example of the Differentiated Service model and mechanism of operation. The architecture of Differentiated Service consists of two basic sets of functions:
Edge functions: include packet classification and traffic conditioning. At the inbound edge of the network, incoming packets are marked. In particular, the DS field in the packet header is set to a certain value. For example, in Figure 2.12, packets sent from H1 to H3 are marked at R1, while packets from H2 to H4 are marked at R2. The labels on the received packets identify the service class to which they belong. Different traffic classes receive different services in the core network. The RFC definition uses the term behavior aggregate rather than the term traffic class. After being marked, a packet can be forwarded immediately into the network, delayed for a period of time before being forwarded, or dropped. We will see that there are many factors that affect how a packet is marked, and whether it is forwarded immediately, delayed, or dropped.
Figure 2.12 DiffServ Example
Core functions: When a DS-marked packet arrives at a DiffServ-capable router, the packet is forwarded to the next router based on the per-hop behavior associated with its class. The per-hop behavior governs how the router's buffers and bandwidth are shared between competing classes. An important principle of the Differentiated Services architecture is that a router's per-hop behavior depends only on the packet's marking, i.e. the class to which it belongs. Therefore, if packets sent from H1 to H3 in the figure receive the same marking as packets from H2 to H4, the network routers treat them identically, regardless of whether a packet originated from H1 or H2. For example, R3 does not distinguish between packets from H1 and H2 when forwarding them to R4. The Differentiated Services architecture thus avoids maintaining per source-destination-pair state in routers, which is important for network scalability.
Chapter Conclusion
Chapter 2 has presented and clarified the two main models for deploying quality of service in IP networks. While the traditional best-effort model has many disadvantages, later models such as IntServ and DiffServ have partly solved the problems that best-effort could not. IntServ guarantees quality of service for each separate flow; it resembles the circuit-switching model and uses the RSVP resource reservation protocol. IntServ suits services that require fixed, unshared bandwidth, such as VoIP and multicast TV. However, IntServ has disadvantages: it consumes many network resources, scales poorly, and lacks flexibility. DiffServ was conceived to address the disadvantages of the IntServ model.
DiffServ ensures quality through per-hop behavior based on the priority of marked packets. The policy for different types of traffic is decided by the administrator and can be changed as conditions require, so it is very flexible. DiffServ makes better use of network resources, avoiding idle bandwidth and processing capacity on routers. In addition, the DiffServ model can be deployed across many independent domains, so expanding the network becomes easy.
Chapter 3: METHODS TO ENSURE QoS FOR MULTIMEDIA COMMUNICATIONS
In packet-switched networks, different packet flows often have to share the transmission medium all the way to the destination station. To ensure the fair and efficient allocation of bandwidth to flows, appropriate serving mechanisms are required at network nodes, especially at gateways or routers, where many different data flows often pass through. The scheduler is responsible for serving packets of the selected flow and deciding which packet will be served next. Here, a flow is understood as a set of packets belonging to the same priority class, or originating from the same source, or having the same source and destination addresses, etc.
In the normal, uncongested state, packets are forwarded as soon as they arrive. Under congestion, if no QoS assurance methods are applied, packets are dropped, degrading service quality; when congestion is prolonged and widespread, the network can effectively "freeze", with many packets dropped and service quality seriously affected.
Therefore, sections 3.2 and 3.3 of this chapter introduce some typical techniques for monitoring network traffic load in order to predict and prevent congestion before it occurs, by dropping packets early when congestion appears imminent.
3.1. DropTail method
DropTail is a simple, traditional queue management method based on the FIFO mechanism: incoming packets are placed in the queue until it is full, after which later arrivals are dropped.
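The mechanism can be sketched as follows; the capacity value in the usage is illustrative.

```python
from collections import deque

class DropTailQueue:
    """FIFO queue with tail drop: arrivals are appended until the buffer
    is full, after which every arriving packet is dropped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0             # count of tail-dropped packets

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1        # buffer full: drop the arriving packet
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        return self.buffer.popleft() if self.buffer else None
```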
Due to its simplicity and ease of implementation, DropTail has been used for many years on Internet router systems. However, this algorithm has the following disadvantages:
− Cannot avoid "lock out": this occurs when one or a few traffic flows monopolize the queue, so that packets of other connections cannot pass through the router. The phenomenon strongly affects reliable transport protocols such as TCP: under its congestion-avoidance algorithm, a locked-out TCP connection reduces its window size and transmission rate exponentially.
− Can cause global synchronization: this results from a severe "lock out". When the queues of some neighboring routers are monopolized by a few connections, a series of other TCP connections cannot get through and simultaneously reduce their transmission rates. After the monopolizing connections pause and the queues clear, it takes a considerable time for the affected TCP connections to return to their original rates.
− Full queue phenomenon: Internet traffic is bursty; packets arrive at routers in clusters rather than evenly spaced. The DropTail mechanism therefore lets the queue stay full for long periods, leading to large average packet delays. With DropTail, the only remedy is to enlarge the router's buffer, which is expensive and ineffective.
− No QoS guarantee: with DropTail there is no way to let important packets pass through the router earlier once all are queued. Yet for multimedia communication, a stable connection and rate are extremely important, and the DropTail algorithm cannot provide them.
The problem of choosing router buffer sizes is to "absorb" short traffic bursts without causing excessive queuing delay. This matters for bursty data transmission: the queue size determines the size of the packet bursts (traffic spikes) that can be carried without drops at the routers.
In IP networks, packet dropping is an important mechanism for indirectly signaling congestion to end stations. A solution that keeps router queues from filling up while reducing the packet drop rate is called active queue management.
3.2. The random early drop method (RED)
3.2.1 Overview
RED (Random Early Detection of congestion; Random Early Drop) is one of the first AQM algorithms proposed in 1993 by Sally Floyd and Van Jacobson, two scientists at the Lawrence Berkeley Laboratory of the University of California, USA. Due to its outstanding advantages compared to previous queue management algorithms, RED has been widely installed and deployed on the Internet.
The most fundamental point of their work is that the most effective place to detect congestion and react to it is at the gateway or router.
Source entities (senders) can also detect congestion, by estimating end-to-end delay, throughput variability, or the rate of packets retransmitted after drops. However, the sender's and receiver's view of a particular connection cannot reveal which gateways in the network are congested, and cannot distinguish propagation delay from queuing delay. Only the gateway has a true view of the state of its queue, the link share of the connections passing through it at any given time, and the quality-of-service requirements of the traffic flows. The RED gateway monitors the average queue length to detect early signs of impending congestion (the average queue length exceeding a predetermined threshold) and reacts in one of two ways:
− Drop incoming packets with a certain probability, to indirectly inform the source of congestion; the source should then reduce its transmission rate, keeping the queue from filling up and preserving the capacity to absorb incoming traffic bursts.
− Mark “congestion” with a certain probability in the ECN field in the header of TCP packets to notify the source (the receiving entity will copy this bit into the acknowledgement packet).
Figure 3.1 RED algorithm
The main goal of RED is to avoid congestion by keeping the average queue size within a small and stable region, which also keeps queuing delay small and stable. Achieving this goal also helps to avoid global synchronization, to avoid penalizing bursty flows (flows with low average throughput but high variability), and to maintain an upper bound on the average queue size even without cooperation from transport-layer protocols.
To achieve the above goals, RED gateways must do the following:
− The first is to detect congestion early and react appropriately to keep the average queue size small enough to keep the network operating in the low latency, high throughput region, while still allowing the queue size to fluctuate within a certain range to absorb short-term fluctuations. As discussed above, the gateway is the most appropriate place to detect congestion and is also the most appropriate place to decide which specific connection to report congestion to.
− The second task is to notify the source of congestion, which is done by marking packets to signal the source to reduce its traffic. Normally the RED gateway drops packets at random; however, if congestion is detected before the queue is full, dropping can be combined with packet marking. The RED gateway thus has two options, drop or mark, where marking sets the ECN field of the packet with a certain probability to signal the source to reduce the traffic entering the network.
− An important goal for RED gateways is to avoid global synchronization and to avoid penalizing bursty flows. Global synchronization occurs when all connections reduce their window sizes simultaneously, causing a severe simultaneous drop in throughput. Drop Tail and Random Drop strategies, moreover, are biased against bursty flows: the gateway queue tends to overflow precisely when packets from such flows arrive. To avoid both phenomena, gateways can use dedicated algorithms to detect congestion and decide which connections to notify. The RED gateway selects arriving packets to mark at random; with this method, the probability of marking a packet from a particular connection is proportional to that connection's share of the bandwidth at the gateway.
− Another goal is to control the average queue size even without cooperation from the source entities. This can be done by dropping packets when the average size exceeds an upper threshold (instead of marking it). This approach is necessary in cases where most connections have transmission times that are less than the round-trip time, or where the source entities are not able to reduce traffic in response to marking or dropping packets (such as UDP flows).
3.2.2 Algorithm
This section describes the algorithm for RED gateways. RED gateways calculate the average queue size using a low-pass filter. This average queue size is compared with two thresholds: minth and maxth. When the average queue size is less than the lower threshold, no incoming packets are marked or dropped; when the average queue size is greater than the upper threshold, all incoming packets are dropped. When the average queue size is between minth and maxth, each incoming packet is marked or dropped with a probability pa, where pa is a function of the average queue size avg; the probability of marking or dropping a packet for a particular connection is proportional to the bandwidth share of that connection at the gateway. The general algorithm for a RED gateway is described as follows: [5]
for each packet arrival:
    calculate the average queue size avg
    if minth ≤ avg < maxth:
        calculate the probability pa
        with probability pa: mark or drop the arriving packet
    else if avg ≥ maxth:
        mark or drop the arriving packet
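Under the standard RED parameters (wq, minth, maxth, maxp; the values below are illustrative, not recommendations), the pseudocode above might be realized as:

```python
import random

class REDQueue:
    """Sketch of the RED drop decision. avg is an EWMA (low-pass filter)
    of the instantaneous queue size; between minth and maxth the drop
    probability grows linearly up to maxp, and the count term spreads
    drops evenly over arrivals."""

    def __init__(self, wq=0.002, minth=5, maxth=15, maxp=0.1):
        self.wq, self.minth, self.maxth, self.maxp = wq, minth, maxth, maxp
        self.avg = 0.0
        self.count = 0               # packets since the last mark/drop

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be marked/dropped."""
        # Low-pass filter: avg <- (1 - wq) * avg + wq * q
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.minth:
            self.count = 0
            return False             # queue comfortably short: accept
        if self.avg >= self.maxth:
            self.count = 0
            return True              # avg above maxth: drop every arrival
        # Between the thresholds: drop with probability pa.
        self.count += 1
        pb = self.maxp * (self.avg - self.minth) / (self.maxth - self.minth)
        pa = pb / max(1e-9, 1 - self.count * pb)
        if random.random() < min(1.0, pa):
            self.count = 0
            return True
        return False
```

Because avg is filtered, a short burst barely moves it, so bursts are absorbed; only a persistently long queue pushes avg past the thresholds.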

* Source: according to Faist E. (2008) [78]
1.2.3.3. Systemic inflammatory response syndrome
The official definition of SIRS is a systemic inflammatory response to a variety of severe clinical insults, manifested by at least 2 of 4 criteria: 1) hyperthermia or hypothermia; 2) tachycardia; 3) tachypnea or hyperventilation; 4) increased or decreased white blood cell count (Table 1.2) [47], [145].
Table 1.2. Unified definition of clinical conditions leading to organ failure
1. Infection: inflammatory response to the presence of microorganisms or to microbial invasion of normally sterile host tissue.
2. Bacteremia: presence of viable bacteria in the blood.
3. SIRS: at least 2 of the following signs:
- Body temperature > 38°C or < 36°C
- Pulse > 90 beats/minute
- Respiratory rate > 20 breaths/minute or PaCO2 < 32 mmHg
- WBC > 12 × 10^9/l, < 4 × 10^9/l, or > 10% immature forms
4. Sepsis: SIRS caused by infection (suspected or confirmed).
5. Severe sepsis: sepsis accompanied by at least one organ failure or reduced blood perfusion.
6. Septic shock: severe sepsis accompanied by hypotension that does not respond to adequate fluid resuscitation.
7. MODS: dysfunction of two or more organ systems in acutely ill patients in whom homeostasis cannot be maintained without therapeutic intervention.
* Source: according to Bone RC (1992) [47], Goldstein B. (2005) [87], Levy MM (2001) [116] and Robertson CM (2006) [145]
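Purely as an illustration of applying the "at least 2 of 4" rule with the adult thresholds of Table 1.2 (a sketch for exposition, not clinical software):

```python
def meets_sirs_criteria(temp_c, pulse, resp_rate, wbc, paco2=None, immature_pct=0.0):
    """Count the four adult SIRS criteria (Bone 1992 thresholds) and
    apply the >= 2-of-4 rule. wbc is in units of 10^9/l."""
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,                        # temperature
        pulse > 90,                                            # heart rate
        resp_rate > 20 or (paco2 is not None and paco2 < 32),  # respiration
        wbc > 12.0 or wbc < 4.0 or immature_pct > 10.0,        # white cells
    ]
    return sum(criteria) >= 2
```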
The concept of SIRS is widely accepted by clinicians and researchers [116]. The term "multiple organ failure" was replaced by "MODS" to emphasize the reversibility and dynamic nature of this syndrome [69].
For simplicity, SIRS is divided into two types: the non-infectious systemic inflammatory response (to burns, trauma, surgery, pancreatitis, etc.), still called SIRS, and the infectious systemic inflammatory response, called sepsis [55], [145]. SIRS and sepsis share many features: both initiate the production of similar inflammatory mediators, which can lead to MODS and eventually death [55].
In most cases, this inflammatory response consists of two phases: the predominantly pro-inflammatory phase (in the first 36 hours) is regulated by the innate immune system and manifests as SIRS. Over the next few days, a predominantly anti-inflammatory phase regulated by the adaptive immune system manifests as immune suppression, making the patient susceptible to infection (Figure 1.7) [82], [91], [107], [199].

Figure 1.7. Progression of the inflammatory response
The inflammatory response consists of two phases: the first is characterized by a pro-inflammatory response (SIRS) dominated by Th1 cells and the cytokines TNF-α and IL-6, which can lead to early MODS. The second is characterized by an anti-inflammatory response (CARS) dominated by Th2 cells and the cytokines IL-4, IL-10, and TGF-β; it is marked by immune suppression and can also lead to late MODS.
* Source: according to Ravat F. (2011) [199]
Both SIRS and infection following the compensatory anti-inflammatory response syndrome (CARS) can lead to MODS. Moore (1995) described early and late MODS models (Figure 1.8), which depend on the severity of the initial insult and are increasingly supported by many authors [107], [176], [199].
It is estimated that approximately one-third of inpatients, over 50% of patients in intensive care units, and over 80% of patients in outpatient units meet the criteria for SIRS [145]. Depending on the disease group, the mortality rate can be as high as 90% [118]. Although caused by many different pathologies, the mechanisms causing SIRS are similar. SIRS represents the body's response to a stimulating event rather than a direct consequence of that event [145].
Figure 1.8. Injury models for SIRS, CARS and MODS (schematic: inflammatory response plotted against time after injury; the "one-hit" model shows severe inflammation with SIRS mediated by innate immunity leading to early MODS, while the "two-hit" model shows moderate inflammation after surgery/trauma followed by additional surgical stress, ischemia/reperfusion injury or infection, with immune suppression and CARS mediated by adaptive immunity leading to late MODS)
An initial severe injury may induce a vigorous early pro-inflammatory response and severe SIRS (the "one-hit" model), resulting in early MODS. In the "two-hit" scenario, patients with initially moderate SIRS eventually develop late MODS because the inflammatory response is reactivated by reoperation, ischemia/reperfusion injury, or infection. Patients who survive early SIRS after severe injury may develop CARS, associated with postoperative infection.
* Source: according to Kimura F. (2010) [107]
1.3. IMMUNE AND CYTOKINE CHANGES IN INFLAMMATORY RESPONSE
1.3.1. The role of immunity in the inflammatory response
Surgical and traumatic injuries severely affect the innate and adaptive immune responses, both pro-inflammatory and anti-inflammatory [91], [107]. Any cause of stress or tissue damage is a danger signal to the immune system [55]. These danger signals, collectively called DAMPs, include pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs, or alarmins). The innate and adaptive immune responses are initiated and regulated by DAMPs via receptors that recognize these molecules (Figure 4 - Appendix 2) [55], [107]. Inflammation is the first response to DAMPs, and activation of immune cells is a prerequisite for the initiation of the inflammatory response [55].
Many complications after cardiac surgery are associated with immune dysregulation [143]. The inflammatory response peaks on the second day and lasts about one week after surgery; the immune response, however, is more prolonged [139], [187], [192].
1.3.2. Immunosuppression in systemic inflammatory response syndrome
The medical literature describes many states of immune suppression in SIRS and sepsis. Immune function is suppressed through decreased cytokine production, increased lymphocyte apoptosis, Th1/Th2 imbalance, and the appearance of regulatory T lymphocytes [55], [63], [107]. Immune suppression also arises because macrophages increase production of prostaglandin E2 (an immunosuppressive factor), produce less IL-12, and present antigen less effectively owing to decreased MHC class II expression, making patients susceptible to infection [63], [107]. Impaired antigen presentation lasts up to 7 days depending on the degree of injury [78]. Moreover, the tendency toward an anti-inflammatory response can be genetic, making the inflammatory response difficult to predict [103].
1.3.3. Cytokines in systemic inflammatory response
1.3.3.1. General information about cytokines
- Definition: cytokines are low-molecular-weight proteins that act as regulatory mediators between cells in the body; they are produced transiently and locally by many cell types, mainly immune cells, during inflammatory activation [78], [102].
- Effects: cytokines act rapidly and briefly and are pleiotropic, with many different effects, mainly directed at the inflammatory response at sites of infection or injury and at facilitating wound healing [78], [100], [102]. However, cytokine secretion varies greatly between individuals [129], [170].
- Classification: In general, inflammatory cytokines are classified into two types:
+ Pro-inflammatory cytokines: TNF-α, IL-1, IL-6 and IL-8, …
+ Anti-inflammatory cytokines: IL-1 receptor antagonists, IL-10, IL-13, … [78], [102], [111].
- Cytokine testing
Historically, the three main methods of cytokine testing include:
+ Molecular assays: quantification of cytokine mRNA, ...
+ Immunoassays: quantification of cytokine proteins. ELISA is the most common test and is considered the standard method of cytokine quantification, widely used in clinical and research settings.
+ Bioassays: measure the actual biological activity of cytokines in target cells; recently, single-cell assays have been used.
Cytokine quantification is difficult because cytokines are usually secreted in lymphoid compartments or tissues rather than in blood. However, the clinical sample commonly used for cytokine quantification is blood [121], [194].
Quantifying cytokines in plasma is also difficult because of their short half-lives. IL-6, for example, exists in several forms (monomers, dimers, multimers) or is bound by anti-IL-6 autoantibodies, so many methods face obstacles in quantifying IL-6 [100].
1.3.3.2. Inflammatory cytokines
Inflammatory cytokines are important factors in the development of SIRS [49], [100], [171]. The immediate postoperative balance between pro-inflammatory and anti-inflammatory responses is a determinant of the clinical course [43]. However, an excessive increase in pro-inflammatory cytokines (cytokine storm) or a decrease in anti-inflammatory response will lead to SIRS, and conversely, a predominance of anti-inflammatory cytokines will lead to CARS, which will suppress the patient's immune system and increase the risk of infection. Both SIRS and infection after CARS progression can lead to MODS [91], [171], [179]. Patients with high expression of specific cytokines are almost certain to develop severe postoperative complications [113].
Cytokine response to extracorporeal circulation
According to Landis (2009), the cytokine response to extracorporeal circulation includes two distinct phases:
1) a pro-inflammatory phase caused by blood contact with artificial surfaces;
2) an anti-inflammatory (homeostatic) phase generated by the body [111].
Pro-inflammatory cytokines (TNF-α, IL-1, IL-2, IL-6, IL-8) are secreted early after the start of extracorporeal circulation (5 minutes to 2 hours) [111]. TNF-α stimulates many cell types, such as cardiomyocytes, macrophages, and endothelial cells, to secrete IL-6. IL-6 in turn stimulates the liver to synthesize acute-phase proteins. IL-6 rises further at the end of surgery, peaks in the blood 2 to 6 hours after aortic clamping, and remains high up to 24 hours after surgery [81], [89], [195]. At the same time, anti-inflammatory cytokines, including IL-1 receptor antagonist (IL-1ra), IL-10, and soluble TNF receptor, are typically released from 1-2 hours up to 24 hours after extracorporeal circulation [89], [111].
TNF-α and IL-1 are early inflammatory cytokines that initiate the inflammatory response and cause fever [108]. TNF-α is typically elevated after extracorporeal circulation and remains so for up to 24 hours. TNF-α and IL-6 are associated with reduced left ventricular contractility, and IL-6 levels correlate with the severity of the inflammatory response to extracorporeal circulation. In addition, polymorphisms in the genes for these inflammatory mediators (common in the population) make some patients more susceptible to postoperative SIRS [81], [126], [195].
1.3.3.3. Interleukin-6 and Interleukin-10
The two most studied cytokines representing the two inflammatory phases are IL-6 and IL-10 [108], [183]. Jouan (2012) demonstrated that genetic testing focused on the IL6-G572C and IL10-C592A single nucleotide polymorphisms could identify the patients at highest risk of poor tolerance to the inflammatory response after THNCT, allowing strategies to alleviate this response to be applied [103]. On this basis, Denizot (2012) proposed IL-6 and IL-10 as the key predictive mediators of the inflammatory response after THNCT [70].
- Interleukin-6
IL-6 is produced mainly by monocytes, macrophages, and endothelial cells in response to tissue injury and inflammatory stimuli [89], [100], [199]. IL-6 production is influenced by the degree of surgical trauma and tissue damage as well as by the outcome of THNCT [75]. Several studies have demonstrated that increased IL-6 is associated with the proinflammatory response after cardiac surgery, that it is an early predictor of mortality in cardiac surgery, and that transfusion in cardiac surgery increases IL-6. Higher IL-6 levels are associated with complications after cardiac surgery [43]. IL-6 activities are often divided into pro-inflammatory and anti-inflammatory types [89], [100].
+ Pro-inflammatory activity: IL-6 is a very sensitive marker of the degree of tissue damage and stimulates the liver to secrete acute-phase proteins. IL-6 has many effects on the immune system: it activates endothelial cells, leading to the attraction of leukocytes to the site of inflammation, and activates the B cells responsible for the humoral immune response, … [78], [89], [171], [199]. IL-6 activates blood coagulation and stimulates platelet production [100], [102]. IL-6 induces fever through its actions on the hypothalamus. Thus, IL-6 is a major "actor" of the acute and early adaptive phases of the inflammatory response, although it also has anti-inflammatory properties [89].
+ Anti-inflammatory activity: IL-6 inhibits some acute-phase response proteases, reduces TNF and IL-1 synthesis, triggers the release of glucocorticoids (GCs), which suppress immune function (MD), and promotes the release of IL-1ra and soluble TNF receptors [78].
+ Some other activities: IL-6 acts on the hypothalamus and causes fever. IL-6 is a potent stimulant of the hypothalamic-pituitary-adrenal axis, causing the release of cortisol; cortisol in turn inhibits the production of IL-6. IL-6 causes hyperglycemia, while hyperglycemia conversely raises IL-6 levels by stimulating monocytes to produce IL-6. In addition, IL-6 inhibits insulin signaling in liver cells and may play a role in the insulin resistance seen in many diseases, including infections [100].
IL-6 increases after trauma, surgery, … and can be detected as early as about 70 minutes after the insult. IL-6 can reach its peak within the first 24 hours and then gradually returns to normal levels [78], [100]. IL-6 remains detectable in the blood for up to 10 days after injury, so it is often used as the first measure of inflammatory activation [33], [113]. High IL-6 levels reflect the risk of death from organ failure after trauma, hemorrhage, infection, … [100]. High IL-6 is also significant in patients with severe acute renal failure after cardiac surgery.

