However, in cases where proteins are overheated or thermally degraded, amino acids may be excreted in the urine as metabolites, leading to miscalculation of digestibility [66], [143].
Research on amino acid digestibility has expanded rapidly since the development of the fasting assay in North America and Europe [128], [211]. In this method, adult male chickens are fasted for 24–48 h, after which a quantity of the experimental feed (typically 30–50 g) is administered directly into the crop. Over the next 48 h, all excreta are collected and quantified. Endogenous amino acid concentrations are determined from the excreta of fasted chickens or of chickens fed a nitrogen-free diet [128], [211].
Excreta-based assessments of amino acid digestibility have been criticized because hindgut microorganisms both utilize dietary protein and contribute microbial protein to the amino acids excreted [34]. Although the presence of microorganisms in the caecum and large intestine of poultry is well established, the actual impact of this microflora on protein nutrition remains unclear [184]. Microbial activity in the caecum may alter the amino acid composition of the excreta and thereby distort digestibility values calculated from excreta analysis. Several studies have described the role of the caecum in determining digestibility. Nitsan and Alumot (1963) provided evidence of proteolytic activity by caecal microorganisms [156]. Later, Isshiki et al. (1974) showed that caecal contents can hydrolyze proteins [101]. Payne et al. (1971) also showed that the apparent digestibility of many amino acids in fish meal, especially threonine, was significantly reduced when caecectomized chickens were tested [171]. According to Parsons et al. (1982), amino acids metabolized by microorganisms affect the microbial protein content of the excreta, and this protein can account for up to 25% of total excreta protein [168]. Therefore, the use of caecectomized chickens is now widely accepted in studies that evaluate amino acid digestibility by excreta analysis, because it removes the influence of the caecal microorganisms [82], [104], [106], [167].
Ileal digestion
Payne et al. (1968) were the first to suggest that ileal fluid analysis is a more reliable method than faeces analysis for assessing protein and amino acid digestibility [170]. Several studies have compared faeces analysis and ileal fluid analysis for assessing amino acid digestibility in a variety of feeds, such as corn, sorghum, wheat, soybean meal, rapeseed meal, meat and bone meal, fish meal, and feather meal [88], [191], [192]. The differences between ileal and total-tract digestibility in these studies indicate that amino acid metabolism by the large-intestinal microflora of chickens does occur, and that amino acid digestibility determined at the terminal ileum is more accurate than that determined in faeces [184]. Determining amino acid digestibility by ileal fluid analysis has the further advantages that it can be applied under ad libitum feeding regimes and can use poultry of different ages [76].
QoS Assurance Methods for Multimedia Communications
The EF PHB requires sufficiently large output-port bandwidth to provide low delay, low loss, and low jitter.
An EF PHB can be implemented when the output port's bandwidth, combined with small buffer sizes and other network resources dedicated to EF packets, allows the router's service rate for EF packets on that port to exceed their arrival rate λ.
This means that EF packets are provisioned, before the service is put into operation, with a pre-allocated amount of output bandwidth and a priority that ensures minimal loss, delay, and jitter.
The EF PHB is suitable for circuit emulation, leased-line emulation, and real-time services such as voice and video, which cannot tolerate high loss, delay, or jitter.
Figure 2.10 Example of an EF PHB implementation
Figure 2.10 shows an example of an EF PHB implementation using a simple priority-queue scheduling technique. At the edges of the DS domain, EF packet traffic is prioritized according to the values agreed in the SLA. The EF queue in the figure must output packets at a rate μ higher than the packet arrival rate λ. To provide an EF PHB across an end-to-end DS domain, bandwidth at the output ports of the core routers must be allocated in advance to guarantee μ > λ. This can be done by a pre-configured provisioning process. In the figure, EF packets are placed in the priority queue (the upper queue), which is served at a rate μ > λ.
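The strict-priority service described above can be sketched in a few lines of Python. The class and packet names here are illustrative only; a real router implements this in forwarding hardware, and the sketch omits the policing of the EF aggregate at the domain edge.

```python
from collections import deque

class PriorityScheduler:
    """Strict-priority scheduler: the EF queue is always served before
    the best-effort queue, so EF packets see minimal queuing delay as
    long as the EF service rate exceeds the EF arrival rate (mu > lambda)."""

    def __init__(self):
        self.ef_queue = deque()   # high-priority (EF) queue
        self.be_queue = deque()   # best-effort queue

    def enqueue(self, packet, dscp):
        # DSCP 46 (binary 101110) is the standard EF codepoint
        if dscp == 46:
            self.ef_queue.append(packet)
        else:
            self.be_queue.append(packet)

    def dequeue(self):
        # serve the EF queue first whenever it is non-empty
        if self.ef_queue:
            return self.ef_queue.popleft()
        if self.be_queue:
            return self.be_queue.popleft()
        return None

sched = PriorityScheduler()
sched.enqueue("be-1", dscp=0)
sched.enqueue("ef-1", dscp=46)
sched.enqueue("ef-2", dscp=46)
order = [sched.dequeue() for _ in range(3)]
# EF packets leave first even though a best-effort packet arrived earlier
```

Note that strict priority alone can starve the lower queue, which is why EF deployments also rate-limit the EF aggregate.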
Since EF is used primarily for real-time services such as voice and video, and since real-time services use UDP rather than TCP, RED is generally not suitable for EF queues: applications using UDP do not respond to random packet drops, so RED would discard packets to no effect.
2.2.4.2 Assured Forwarding (AF) PHB
The AF PHB is defined by RFC 2597. Its purpose is to deliver packets reliably, so delay and jitter are considered less important than packet loss. The AF PHB is suitable for non-real-time services such as applications using TCP. RFC 2597 first defines four classes: AF1, AF2, AF3, and AF4. Within each AF class, packets are then assigned one of three drop-precedence levels.
Table 2.8 shows the four AF classes, the 12 AF subclasses, and the DSCP values for the 12 subclasses defined by RFC 2597. RFC 2597 also allows additional precedence levels to be used internally; however, such levels have only local significance.
PHB class   PHB subclass   Drop precedence   DSCP
AF4         AF41           Low               100010
AF4         AF42           Medium            100100
AF4         AF43           High              100110
AF3         AF31           Low               011010
AF3         AF32           Medium            011100
AF3         AF33           High              011110
AF2         AF21           Low               010010
AF2         AF22           Medium            010100
AF2         AF23           High              010110
AF1         AF11           Low               001010
AF1         AF12           Medium            001100
AF1         AF13           High              001110

Table 2.8 AF DSCPs
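The DSCP values in Table 2.8 follow a simple bit pattern defined by RFC 2597: the first three bits encode the AF class, the next two bits the drop precedence, and the last bit is 0. A small helper function (the name `af_dscp` is ours, for illustration) makes the encoding explicit:

```python
def af_dscp(af_class, drop_prec):
    """Return the 6-bit DSCP for AFxy per RFC 2597:
    bits 5-3 = class (1-4), bits 2-1 = drop precedence
    (1 = low, 2 = medium, 3 = high), bit 0 = 0."""
    assert 1 <= af_class <= 4 and 1 <= drop_prec <= 3
    return (af_class << 3) | (drop_prec << 1)

# AF41 -> binary 100010, decimal 34
assert format(af_dscp(4, 1), '06b') == '100010'
```

The same pattern reproduces every row of Table 2.8, e.g. `af_dscp(2, 3)` yields 010110 for AF23.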
The AF PHB ensures that packets are forwarded with a high probability of delivery to the destination, within the bounds of the rate agreed in an SLA. If AF traffic at an ingress port exceeds the agreed rate, it is considered non-compliant, or "out of profile", and the excess packets are not delivered with the same probability as the "in profile" packets that conform to the defined traffic. Under network congestion, out-of-profile packets are dropped before in-profile packets.
When service levels are defined using AF classes, different quantities and qualities of service across AF classes can be realized by allocating different amounts of bandwidth and buffer space to the four classes. Unlike EF, most AF traffic is non-real-time traffic using TCP, and RED is an AQM (Active Queue Management) strategy well suited to AF PHBs. The four AF classes can be implemented as four separate queues, with the output-port bandwidth divided among them. Within each AF queue, packets are marked with one of three "colors" corresponding to the three drop-precedence levels.
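The three-color marking can be combined with RED by giving each color its own thresholds, as in WRED-style implementations: higher drop precedence gets lower thresholds, so out-of-profile packets are discarded first under congestion. The sketch below is illustrative; the threshold and probability values are arbitrary, not recommendations.

```python
import random

# Illustrative WRED-style profile per "color" (drop precedence).
PROFILES = {
    "green":  {"min_th": 30, "max_th": 60, "max_p": 0.02},
    "yellow": {"min_th": 20, "max_th": 40, "max_p": 0.05},
    "red":    {"min_th": 10, "max_th": 20, "max_p": 0.10},
}

def drop_probability(avg_queue, color):
    """RED drop probability for a packet of the given color."""
    p = PROFILES[color]
    if avg_queue < p["min_th"]:
        return 0.0
    if avg_queue >= p["max_th"]:
        return 1.0
    # linear ramp between min_th and max_th
    return p["max_p"] * (avg_queue - p["min_th"]) / (p["max_th"] - p["min_th"])

def should_drop(avg_queue, color):
    return random.random() < drop_probability(avg_queue, color)
```

At any given average queue length, red (high-precedence) packets are dropped with higher probability than yellow, and yellow with higher probability than green.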
Of the 32 Pool 1 DSCPs, 21 have been standardized: one for the EF PHB, 12 for the AF PHB (Table 2.8), and 8 for the Class Selector (CS) PHBs. This leaves 11 Pool 1 DSCPs available for future standardization.
2.2.5. Example of Differentiated Services
We will look at an example of the Differentiated Service model and mechanism of operation. The architecture of Differentiated Service consists of two basic sets of functions:
Edge functions: include packet classification and traffic conditioning. At the inbound edge of the network, incoming packets are marked. In particular, the DS field in the packet header is set to a certain value. For example, in Figure 2.12, packets sent from H1 to H3 are marked at R1, while packets from H2 to H4 are marked at R2. The labels on the received packets identify the service class to which they belong. Different traffic classes receive different services in the core network. The RFC definition uses the term behavior aggregate rather than the term traffic class. After being marked, a packet can be forwarded immediately into the network, delayed for a period of time before being forwarded, or dropped. We will see that there are many factors that affect how a packet is marked, and whether it is forwarded immediately, delayed, or dropped.
Figure 2.12 DiffServ Example
Core functions: when a DS-marked packet arrives at a DiffServ-capable router, it is forwarded to the next router according to the per-hop behavior associated with its class. The per-hop behavior governs how router buffers and link bandwidth are shared among competing classes. An important principle of the Differentiated Services architecture is that a router's per-hop behavior depends only on the packet's marking, that is, on the class to which it belongs. Therefore, if packets sent from H1 to H3 in the figure carry the same marking as packets from H2 to H4, the network routers treat them identically, regardless of whether a packet originated from H1 or H2. For example, R3 does not distinguish between packets from H1 and H2 when forwarding them to R4. The Differentiated Services architecture thus avoids the need to maintain router state for individual source-destination pairs, which is essential for network scalability.
Chapter Conclusion
Chapter 2 has presented and clarified two main models for deploying quality of service in IP networks. While the traditional best-effort model has many shortcomings, later models such as IntServ and DiffServ have partly solved the problems that best effort could not. IntServ guarantees quality of service per individual flow; it is built on a model similar to circuit switching, using the RSVP resource reservation protocol. IntServ suits services that require fixed, unshared bandwidth, such as VoIP and multicast TV services. However, IntServ has drawbacks: it consumes many network resources, scales poorly, and lacks flexibility. DiffServ was conceived to overcome the disadvantages of the IntServ model.
DiffServ ensures quality through hop-by-hop behavior based on the priority marked on packets. The policy for each type of traffic is decided by the administrator and can be changed as needed, so it is very flexible. DiffServ makes better use of network resources, avoiding idle bandwidth and processing capacity on routers. In addition, the DiffServ model can be deployed across many independent domains, which makes the network easy to expand.
Chapter 3: METHODS TO ENSURE QoS FOR MULTIMEDIA COMMUNICATIONS
In packet-switched networks, different packet flows often have to share the transmission medium all the way to the destination station. To ensure the fair and efficient allocation of bandwidth to flows, appropriate serving mechanisms are required at network nodes, especially at gateways or routers, where many different data flows often pass through. The scheduler is responsible for serving packets of the selected flow and deciding which packet will be served next. Here, a flow is understood as a set of packets belonging to the same priority class, or originating from the same source, or having the same source and destination addresses, etc.
In the normal state, when there is no congestion, packets are sent as soon as they arrive. Under congestion, if QoS assurance methods are not applied, prolonged congestion can cause packet drops that degrade service quality. In some cases congestion is prolonged and spreads through the network, which can "freeze" the network or cause many packets to be dropped, seriously affecting service quality.
Therefore, sections 3.2 and 3.3 of this chapter introduce some typical techniques for monitoring network traffic load that predict and prevent congestion before it occurs, by dropping packets early when congestion appears imminent.
3.1. DropTail method
DropTail is a simple, traditional queue management method based on the FIFO mechanism. All incoming packets are placed in the queue; when the queue is full, subsequently arriving packets are dropped.
Due to its simplicity and ease of implementation, DropTail has been used for many years on Internet router systems. However, this algorithm has the following disadvantages:
− It cannot avoid the "lock-out" phenomenon, which occurs when one or a few traffic streams monopolize the queue, preventing packets of other connections from passing through the router. This phenomenon severely affects reliable transport protocols such as TCP: under its congestion-avoidance algorithm, a locked-out TCP connection reduces its window size and sharply cuts its transmission rate.
− It can cause global synchronization, which is the result of severe lock-out. When the queues of some neighboring routers are monopolized by a few connections, a series of other TCP connections cannot get through and simultaneously reduce their transmission rates. After the monopolizing connections pause and the queues clear, it takes a considerable time for the TCP connections to return to their original rates.
− The full-queue phenomenon: Internet traffic is bursty, and packets arrive at routers in clusters rather than one by one. The DropTail mechanism therefore lets the queue stay full for long periods, leading to large average queuing delays. The only way to mitigate this with DropTail is to enlarge the router's buffer, which is expensive and ineffective.
− No QoS guarantee: with the DropTail mechanism there is no way to give important packets priority through the router when all packets share the same queue. For multimedia communication, where stable connectivity and rate are essential, the DropTail algorithm cannot meet the requirement.
Router buffer sizes should be chosen to "absorb" short traffic bursts without introducing excessive queuing delay; this is necessary for bursty data transmission. The queue size determines the size of the packet bursts (traffic spikes) that can be carried without drops at the routers.
In IP networks, packet dropping is an important mechanism for indirectly signalling congestion to the end stations. A solution that keeps router queues from filling up while reducing the packet drop rate is called active queue management (AQM).
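The DropTail behavior described in this section can be sketched as a bounded FIFO (illustrative Python, not router code):

```python
from collections import deque

class DropTailQueue:
    """Plain FIFO queue with tail drop: once the buffer is full,
    every newly arriving packet is discarded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buf) >= self.capacity:
            self.dropped += 1   # tail drop: the arriving packet is lost
            return False
        self.buf.append(packet)
        return True

    def dequeue(self):
        return self.buf.popleft() if self.buf else None

q = DropTailQueue(capacity=3)
for i in range(5):              # a 5-packet burst into a 3-slot buffer
    q.enqueue(i)
# the last two packets of the burst are dropped
```

The example shows the full-queue weakness directly: a burst only slightly larger than the buffer loses its tail, regardless of how important those packets are.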
3.2. Early random drop method – RED
3.2.1 Overview
RED (Random Early Detection of congestion; Random Early Drop) is one of the first AQM algorithms proposed in 1993 by Sally Floyd and Van Jacobson, two scientists at the Lawrence Berkeley Laboratory of the University of California, USA. Due to its outstanding advantages compared to previous queue management algorithms, RED has been widely installed and deployed on the Internet.
The most fundamental point of their work is that the most effective place to detect congestion and react to it is at the gateway or router.
Source entities (senders) can also do this by estimating end-to-end delay, throughput variability, or the rate of packet retransmissions due to drops. However, the sender's and receiver's view of a particular connection cannot tell which gateways on the network are congested, nor distinguish propagation delay from queuing delay. Only the gateway has a true view of the queue state, the link share of the connections passing through it at any given moment, and the quality-of-service requirements of the traffic flows. The RED gateway monitors the average queue length to detect early signs of impending congestion (the average queue length exceeding a predetermined threshold) and reacts in one of two ways:
− Drop incoming packets with a certain probability, to inform the source indirectly of congestion; the source should then reduce its transmission rate so that the queue does not fill up, preserving the capacity to absorb incoming traffic bursts.
− Mark "congestion" with a certain probability in the ECN field of the packet's IP header to notify the source (the receiving entity copies this indication into the acknowledgement packet).
Figure 3.1 RED algorithm
The main goal of RED is to avoid congestion by keeping the average queue size within a small and stable region, which also keeps the queuing delay small and stable. Achieving this goal also helps to avoid global synchronization, to avoid bias against bursty traffic flows (flows with low average throughput but high variability), and to maintain an upper bound on the average queue size even without cooperation from transport-layer protocols.
To achieve the above goals, RED gateways must do the following:
− The first is to detect congestion early and react appropriately to keep the average queue size small enough to keep the network operating in the low latency, high throughput region, while still allowing the queue size to fluctuate within a certain range to absorb short-term fluctuations. As discussed above, the gateway is the most appropriate place to detect congestion and is also the most appropriate place to decide which specific connection to report congestion to.
− The second is to notify the source of congestion, which is done by marking or dropping packets to tell the source to reduce its traffic. Normally the RED gateway drops packets at random; however, if congestion is detected before the queue is full, dropping can be combined with packet marking to signal congestion. The RED gateway thus has two options, drop or mark, where marking sets the ECN field of the packet with a certain probability to signal the source to reduce the traffic entering the network.
− An important goal of RED gateways is to avoid global synchronization and to avoid bias against bursty traffic flows. Global synchronization occurs when all connections reduce their transmission windows simultaneously, causing a severe simultaneous drop in throughput. Drop Tail and Random Drop strategies, on the other hand, are biased against bursty flows: the gateway queue tends to overflow precisely when packets from these flows arrive. To avoid both phenomena, gateways can use dedicated algorithms to detect congestion and to decide which connections are notified of it. The RED gateway selects incoming packets to mark at random; with this method, the probability of marking a packet from a particular connection is proportional to that connection's share of the bandwidth at the gateway.
− Another goal is to control the average queue size even without cooperation from the source entities. This can be done by dropping packets (instead of marking them) when the average size exceeds an upper threshold. This approach is necessary when most connections last less than a round-trip time, or when the source entities cannot reduce their traffic in response to marking or dropping (such as UDP flows).
3.2.2 Algorithm
This section describes the algorithm for RED gateways. RED gateways calculate the average queue size using a low-pass filter. This average queue size is compared with two thresholds: minth and maxth. When the average queue size is less than the lower threshold, no incoming packets are marked or dropped; when the average queue size is greater than the upper threshold, all incoming packets are dropped. When the average queue size is between minth and maxth, each incoming packet is marked or dropped with a probability pa, where pa is a function of the average queue size avg; the probability of marking or dropping a packet for a particular connection is proportional to the bandwidth share of that connection at the gateway. The general algorithm for a RED gateway is described as follows: [5]
For each packet arrival:
    calculate the average queue size avg
    if minth ≤ avg < maxth
        calculate the marking/dropping probability pa
        with probability pa: mark or drop the arriving packet
    else if maxth ≤ avg
        mark or drop the arriving packet
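A minimal Python sketch of this algorithm, following the original RED design (an EWMA low-pass filter for the average queue size, a linear drop probability between the two thresholds, and a count-based correction that spaces drops out). The parameter values are illustrative only, not tuned recommendations.

```python
import random

class REDQueue:
    """Sketch of the RED gateway algorithm described above."""

    def __init__(self, min_th=5, max_th=15, max_p=0.1, wq=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.wq = max_p, wq
        self.avg = 0.0     # EWMA of the queue size (the "low-pass filter")
        self.count = 0     # packets enqueued since the last mark/drop
        self.queue = []

    def on_arrival(self, packet):
        """Return True if the packet is enqueued, False if dropped."""
        # low-pass filter over the instantaneous queue length
        self.avg = (1 - self.wq) * self.avg + self.wq * len(self.queue)
        if self.avg < self.min_th:
            self.count = -1                  # below lower threshold: accept
        elif self.avg >= self.max_th:
            self.count = 0
            return False                     # above upper threshold: drop all
        else:
            self.count += 1
            pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            # pa grows with count, spreading drops evenly over arrivals
            pa = pb / max(1e-9, 1 - self.count * pb)
            if random.random() < pa:
                self.count = 0
                return False                 # early random drop (or ECN mark)
        self.queue.append(packet)
        return True

red = REDQueue()
ok = red.on_arrival("pkt")   # average queue still ~0, below minth: accepted
```

In a gateway supporting ECN, the two `return False` branches would mark the packet instead of dropping it; the control logic is identical.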
To assess ileal digestibility, indigestible indicators are added to the diet. Many substances have been used as indicators in digestion experiments, the most common being Cr2O3 (chromic oxide) and AIA (acid-insoluble ash). The use of indicators in nutritional studies was reviewed by Kotb and Luckey (1972) [116].
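With an indigestible indicator, apparent digestibility is computed from the change in the amino acid : indicator ratio between the diet and the digesta. A sketch of the standard indicator-ratio formula follows; the numbers are invented for illustration.

```python
def apparent_digestibility(aa_diet, aa_digesta, marker_diet, marker_digesta):
    """Apparent digestibility (%) from indicator ratios:
    100 * (1 - (marker_diet / marker_digesta) * (aa_digesta / aa_diet)).
    All concentrations must be on the same basis (e.g. g/kg dry matter)."""
    return 100.0 * (1.0 - (marker_diet / marker_digesta) * (aa_digesta / aa_diet))

# illustrative numbers: lysine 10 g/kg in the diet, 4 g/kg in ileal digesta;
# Cr2O3 marker 3 g/kg in the diet, 6 g/kg in the digesta
d = apparent_digestibility(10.0, 4.0, 3.0, 6.0)
# -> 100 * (1 - 0.5 * 0.4) = 80.0 %
```

Because the indicator is indigestible, its concentration rises as nutrients are absorbed, so no total collection of digesta is needed.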
Ileal digestibility can be assessed in two ways, depending on the ileal fluid collection technique. The simplest method is to kill the birds; the second is to insert a cannula into the mid-ileum [177], [232]. In earlier studies, chickens were killed by cervical dislocation. However, this method has been criticized because it increases the endogenous protein content through shedding of mucosal cells into the intestinal tract at the time of slaughter [184]. Currently, chemical euthanasia (e.g., with sodium pentobarbitone) is commonly used because it minimizes intestinal motility and mucosal shedding compared with traditional slaughter techniques [19].

To overcome the disadvantages of slaughter, some researchers have used ileal cannulation [84], [177]. Comparing the slaughter technique (under anaesthesia) with cannulation, Johns et al. (1986b) showed that the ileal digestibility of most amino acids, with the exception of arginine and glutamic acid, was significantly lower in cannulated roosters than in slaughtered growing chickens [105]. However, the use of ileal cannulation is limited by problems associated with cannula removal, fluctuations in digesta flow, and the need for appropriate markers [184]. In addition, Tanksley et al. (1981) suggested that the physiological changes to the intestine caused by cannulation may interfere with the animal's normal physiology [229]. Moreover, digestibility values determined by cannulation in adult chickens may not reflect digestion in rapidly growing broilers [32], [74]. Therefore, slaughtering the birds and collecting ileal fluid is the most commonly chosen method for assessing amino acid digestion [114], [189].
Methods for determining endogenous amino acids include the classical methods, the peptide-feeding technique combined with ultrafiltration, the isotope-marker technique, and the homoarginine technique. The classical methods comprise the nitrogen-free diet method, the regression method, and the starvation method. In the nitrogen-free diet method, experimental animals are fed a nitrogen-free diet and the faeces are analyzed for amino acid content. In the regression method, animals are fed diets containing increasing levels of the test feed, the amino acid content of the faeces or ileal fluid is analyzed, and endogenous amino acid losses are estimated by extrapolating the regression line to zero amino acid intake. In the starvation method, experimental roosters are fasted for 24–48 hours and the faeces are then collected for amino acid analysis. The starvation method is one of the classical methods used in many studies to estimate basal endogenous amino acids in poultry. These techniques have been applied to poultry by many research groups [25], [74], [162], [224].
Yamazaki's (1983) comparison of starvation and nitrogen-free diets showed similar endogenous amino acid excretion for both methods [254]. In contrast, Muztar and Slinger (1980) argued that endogenous amino acids should be determined in birds fed nitrogen-free diets rather than in fasted chickens [152]. According to Nasset (1965), nitrogen-free diets provide a suitable stimulus to the gastrointestinal tract for the secretion of endogenous proteins [154]. However, these techniques have been criticized because during starvation, or when the diet contains no protein, the body is in negative nitrogen balance and whole-body protein synthesis falls rapidly, which may affect the flow of proteins into the intestine [184]. According to Low (1990), nitrogen-free diets are inappropriate because the absence of dietary protein causes very large metabolic changes and the animal is no longer in a normal physiological state [134]. For these reasons, correcting digestibility with values from classical methods underestimates true digestibility [22]. Many studies have shown that endogenous amino acid losses vary with the protein source [219], the dietary protein content [30], the dietary fiber content [219], and the presence of anti-nutritional factors [16]. This suggests that using a single value from classical methods to correct for endogenous losses is unreliable across different feeds [184].
In the regression method, a regression equation is used to estimate the endogenous amino acid content at zero protein intake. This, too, can lead to large estimation errors, especially when the lowest data point lies far from the theoretical zero-intake point [184]. The technical complexity of the method and of the digestibility estimation are the reasons it has not been widely adopted, although it has been used to determine the ileal digestibility of some feeds [197], [209].
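The extrapolation step of the regression method amounts to an ordinary least-squares fit whose intercept at zero intake estimates the endogenous loss. A sketch with invented data (the numbers are illustrative only):

```python
def linear_regression(x, y):
    """Ordinary least-squares fit y = a*x + b (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

# hypothetical data: apparently digested lysine (g/d) at four increasing
# lysine intakes (g/d); the slope estimates true digestibility and the
# negative intercept estimates the endogenous (basal) lysine loss
intake   = [2.0, 4.0, 6.0, 8.0]
digested = [1.3, 3.0, 4.7, 6.4]
slope, intercept = linear_regression(intake, digested)
# slope -> 0.85 (true digestibility), intercept -> -0.4 (endogenous loss)
```

The extrapolation error mentioned above is visible here: if the lowest intake point is far from zero, small noise in the data moves the intercept substantially.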
The peptide-feeding method combined with ultrafiltration estimates the endogenous amino acid content of the ileum when animals are fed peptides (from enzymatically hydrolyzed casein, EHC), after which the ileal fluid is passed through an ultrafiltration system [150]. In this method, a semi-synthetic diet containing EHC as the sole protein source is fed to the animals. Ileal fluid is collected and the nitrogen fractions are separated by centrifugation and ultrafiltration into two main fractions: a high-molecular-weight fraction (>10,000 Da), used to assess endogenous amino acids, and a low-molecular-weight fraction containing unabsorbed dietary amino acids and small peptides, non-protein nitrogen, and low concentrations of endogenous free amino acids [184]. Although less criticized than the classical methods, this method can only be used to correct the ileal digestibility of protein sources that contain no fiber and/or anti-nutritional factors, such as animal protein meals [53]. It may also underestimate endogenous amino acids, because some endogenous free amino acids and small endogenous peptides may be removed with the low-molecular-weight fraction [36], [123].
Isotopic markers have been used in many studies over the years to determine endogenous amino acid content. The isotopes used include the stable isotope 15N and the radioactive isotopes 14C, 35S, and 75Se. The 15N isotope-dilution technique of Souffrant et al. (1982, cited in [184]) has been used by many research groups to distinguish endogenous proteins from undigested dietary proteins [48], [199]. These groups found that the endogenous protein content of ileal fluid determined by isotope dilution was higher than that obtained with nitrogen-free diets.
Despite the interest of nutritionists, this method has several limitations. Enriching endogenous secretions with 15N for analysis is not easy. The inability to assess all amino acids recovered in the ileal fluid ([48], [126]) and the recovery of precursor pools [124] are further disadvantages. Standardization of conditions such as feeding frequency, diet type, tracer infusion rate and protocol, sampling technique, sample handling, and the choice of precursor pools is essential if data are to be compared with high reliability [73].
The homoarginine method was developed by Hagemeister and Erbersdobler in 1985 [184]. It uses homoarginine as a marker to determine endogenous amino acid content. The lysine residues of the feed protein are converted to homoarginine by guanidination with O-methylisourea in an alkaline medium [138]. After the animals are fed the labeled protein, the endogenous amino acid content is determined by comparing the amino acid : homoarginine ratio in the diet and in the ileal fluid. Homoarginine is not present in common feeds [179], and although it is digested and absorbed like other amino acids, it does not reappear in endogenous intestinal secretions [218]. These properties give the homoarginine technique important advantages over isotope-labeling techniques. The method has been used to determine endogenous amino acid content and to estimate amino acid digestibility in poultry [16], [17], [218].
Recently, a group of poultry nutritionists completed a number of studies quantifying endogenous amino acid losses in broilers and turkeys during the first 3 weeks of life to establish a baseline for correcting ileal digestible amino acid values. Although the optimal baseline endogenous correction values were not determined, these studies indicated that the use of a protein-free diet was best because it resulted in a lower baseline endogenous correction than did a casein-based diet [20].
Methods of assessing apparent ileal digestibility
The apparent digestibility of amino acids in a feed can be assessed by the direct method, the difference method, or the regression method. In the direct method, the experimental diet is formulated so that the test feed is the sole source of protein. For cereals, 1 kg of the experimental diet typically contains 918 g of the test feed, 20 g of vegetable oil, and 42 g of mineral and vitamin supplements. For protein meals, dextrose is added to adjust the total protein content of the diet to about 16–20%. Inorganic calcium and phosphorus supplements are included in the diet if the test feed is a vegetable protein source, blood meal, or feather meal; feeds such as fish meal, meat meal, and meat and bone meal already contain high levels of calcium and phosphorus, so inorganic calcium and phosphorus supplements are not included for them. For animal-protein test feeds, Solkafloc or pulp is added to the diet at 30 g/kg to increase its fiber content. Vitamins and trace minerals, together with indigestible indicators such as Cr2O3, acid-insoluble ash (AIA), and TiO2, are also added to the diet. Digestible carbohydrates such as dextrose, together with vegetable oils, serve as the energy sources in the diet [32], [190].
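With an indigestible marker in the diet, apparent ileal digestibility can be computed from the marker and amino acid concentrations in the diet and in the digesta, without total collection. The following is a minimal sketch of the standard marker-ratio calculation; the function name and all numbers are hypothetical illustrations, not values from the cited protocols:

```python
def apparent_ileal_digestibility(aa_diet, aa_digesta, marker_diet, marker_digesta):
    """Apparent ileal digestibility coefficient from marker ratios.

    aa_* and marker_* are concentrations (e.g. g/kg dry matter) of the
    amino acid and of the indigestible marker (Cr2O3, TiO2, AIA, ...)
    in the diet and in the ileal digesta.
    """
    return 1 - (marker_diet / marker_digesta) * (aa_digesta / aa_diet)

# Hypothetical numbers for illustration only:
aid = apparent_ileal_digestibility(aa_diet=10.0, aa_digesta=4.0,
                                   marker_diet=3.0, marker_digesta=9.0)
print(round(aid, 3))  # 1 - (3/9) * (4/10) = 0.867
```

Because the marker is not absorbed, the ratio of its concentration in diet to digesta scales the amino acid recovery to a common dry-matter basis.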
The difference method is based on the assumption that there is no interaction between the basal diet and the test feed. Two diets are used in the digestibility test: the basal diet and the test diet. The basal diet contains the basic feed ingredients; the test diet is formulated by replacing a portion of the basal diet with the test feed (usually in a 50:50 ratio). The digestibility of the test feed is calculated from the difference in digestibility between the two diets and the proportion of each amino acid contributed by each [153].
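Under the no-interaction assumption, the difference-method calculation can be sketched as follows; the function name and numbers are illustrative, not from the cited protocol:

```python
def digestibility_by_difference(d_test_diet, d_basal_diet,
                                aa_from_basal, aa_from_test):
    """Digestibility of the test feed by the difference method.

    d_test_diet, d_basal_diet: measured digestibility coefficients of
    the test diet and the basal diet.
    aa_from_basal, aa_from_test: amount of the amino acid contributed
    to the test diet by the basal portion and by the test feed (same
    units). Assumes no basal-diet x test-feed interaction.
    """
    total = aa_from_basal + aa_from_test
    return (d_test_diet * total - d_basal_diet * aa_from_basal) / aa_from_test

# Hypothetical 50:50 example: each portion contributes 5 g/kg of lysine.
d = digestibility_by_difference(d_test_diet=0.80, d_basal_diet=0.85,
                                aa_from_basal=5.0, aa_from_test=5.0)
print(round(d, 2))  # (0.80 * 10 - 0.85 * 5) / 5 = 0.75
```

The test feed's digestibility is whatever remains after the basal portion's (separately measured) contribution is subtracted from the test diet's overall digestibility.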
The third method used to evaluate amino acid digestibility is the regression method. In this method, chickens are fed diets with increasing concentrations of the experimental feed (usually four concentration levels). The ileal digestibility of each diet is calculated separately. Then, the ileal amino acid digestibility of the experimental feed is calculated by linear regression [197].
A digestibility assay should be performed with at least four replicates; the number of birds per replicate depends on the age of the birds and the amount of digesta to be collected. Typically, amino acid digestibility assays in poultry use chickens aged 35–42 days [32]. The birds are fed the experimental diets for at least three days before ileal fluid is collected [32], [190]. To minimize the impact of intestinal motility, the birds are usually killed by injection of sodium pentobarbitone. Digesta from the distal half of the ileum is flushed out with distilled water rather than expressed by hand, to avoid stripping the intestinal mucosa [190].
1.4. Application of digestible amino acid values in diet formulation
Digestible amino acid values are gaining interest as a basis for poultry diet formulation. The major advantage of formulating diets based on digestible amino acids is that it is possible to increase the proportion of alternative ingredients, especially low-quality protein sources, in poultry diets [32]. This will allow for a wider range of ingredients to be used in the diet while maintaining growth [33].
In the past, attempts to partially replace soybean meal with poorly digestible feedstuffs in broiler diets have resulted in lower growth than expected because the substitutions often did not take into account the low amino acid digestibility of these feedstuffs. However, many studies have now shown the beneficial effects of using digestible amino acids in poultry diet formulation on increasing the proportion of poorly digestible feedstuffs, such as cottonseed meal, rapeseed meal and meat and bone meal [67], [188], [182], [183], [249].
The application of digestible amino acid values in diet formulation can be done by several methods [168]. The methods differ mainly in the degree to which the feed ingredient/requirement matrix is modified [168]. The most comprehensive method is to convert total amino acid requirements into digestible amino acid requirements. Parsons (1991) reviewed 28 published studies on lysine and total sulfur amino acid requirements for broilers, turkeys, and laying hens and concluded that digestible amino acid requirements are 8% to 10% lower than total amino acid requirements [167].
Another, less used method is to accept the amino acid values for corn and soybean meal, and the amino acid requirements, as total content, and to adjust the total-content values of the cereal and protein-rich ingredients relative to corn and soybean meal on the basis of their relative digestibility [168]. Thus, in this method, corn and soybean meal serve as the reference ingredients [168]. The advantage of this method is that there is less variation in the feed matrix [168]. However, this method also has drawbacks [168].

The EF PHB requires sufficiently large output-port bandwidth to provide low delay, low loss, and low jitter. EF PHBs can be implemented if the output-port bandwidth is sufficiently large, combined with small buffer sizes and other network resources dedicated to EF packets, so that the router's service rate for EF packets on an output port exceeds the arrival rate λ of EF packets at that port.
This means that packets carrying the EF PHB are served with a pre-allocated amount of output bandwidth and a priority that ensures minimal loss, delay, and jitter.
The EF PHB is suitable for virtual-wire and leased-line emulation and for real-time services such as voice and video, which cannot tolerate high loss, delay, or jitter.
Figure 2.10 Example of an EF PHB implementation
Figure 2.10 shows an example of an EF PHB implementation using a simple priority-queue scheduling technique. At the edges of the DS domain, EF traffic is policed according to the values agreed in the SLA. The EF queue in the figure must be served at a rate μ higher than the packet arrival rate λ. To provide an EF PHB across an end-to-end DS domain, bandwidth at the output ports of the core routers must be allocated in advance to guarantee μ > λ; this can be done by a pre-configured provisioning process. In the figure, EF packets are placed in the priority queue (the upper queue), which is kept short enough to operate with μ > λ.
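The priority scheduling in Figure 2.10 can be sketched as a strict-priority scheduler with a small buffer dedicated to EF packets; class and method names below are illustrative only:

```python
from collections import deque

class StrictPriorityScheduler:
    """Minimal sketch of strict-priority scheduling for EF:
    the EF queue is always served before the best-effort queue."""

    def __init__(self, ef_capacity):
        self.ef = deque()            # small buffer reserved for EF packets
        self.best_effort = deque()
        self.ef_capacity = ef_capacity

    def enqueue(self, packet, is_ef):
        if is_ef:
            if len(self.ef) < self.ef_capacity:
                self.ef.append(packet)
            # else: EF packet dropped (policing keeps this rare if mu > lambda)
        else:
            self.best_effort.append(packet)

    def dequeue(self):
        if self.ef:                  # EF packets preempt best-effort traffic
            return self.ef.popleft()
        if self.best_effort:
            return self.best_effort.popleft()
        return None
```

Keeping the EF buffer small bounds the queuing delay an EF packet can ever see; the provisioning requirement μ > λ ensures the small buffer rarely overflows.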
Since EF is used primarily for real-time services such as voice and video, and since such services use UDP rather than TCP, RED is generally not suitable for EF queues: applications using UDP do not respond to random packet drops, so RED would discard packets needlessly.
2.2.4.2 Assured Forwarding (AF) PHB
The AF PHB is defined by RFC 2597. Its purpose is to deliver packets reliably, so delay and jitter are considered less important than packet loss. The AF PHB is suitable for non-real-time services such as applications using TCP. It first defines four classes: AF1, AF2, AF3, and AF4. Within each AF class, packets are further divided into three subclasses with three distinct drop-precedence levels.
Table 2.8 shows the four AF classes, the 12 AF subclasses, and the DSCP values for the 12 subclasses defined by RFC 2597. RFC 2597 also allows additional drop-precedence levels to be defined for internal use, but such levels have only local significance.
| PHB class | PHB subclass | Drop precedence | DSCP   |
|-----------|--------------|-----------------|--------|
| AF4       | AF41         | Low             | 100010 |
| AF4       | AF42         | Medium          | 100100 |
| AF4       | AF43         | High            | 100110 |
| AF3       | AF31         | Low             | 011010 |
| AF3       | AF32         | Medium          | 011100 |
| AF3       | AF33         | High            | 011110 |
| AF2       | AF21         | Low             | 010010 |
| AF2       | AF22         | Medium          | 010100 |
| AF2       | AF23         | High            | 010110 |
| AF1       | AF11         | Low             | 001010 |
| AF1       | AF12         | Medium          | 001100 |
| AF1       | AF13         | High            | 001110 |

Table 2.8 AF DSCPs
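The codepoints in Table 2.8 follow a regular bit pattern: three bits encode the AF class, two bits the drop precedence, and the final bit is 0. A small sketch (the helper name is illustrative):

```python
def af_dscp(af_class, drop_precedence):
    """DSCP for AFxy: 3 class bits, 2 drop-precedence bits, trailing 0."""
    assert 1 <= af_class <= 4 and 1 <= drop_precedence <= 3
    return (af_class << 3) | (drop_precedence << 1)

print(format(af_dscp(4, 1), "06b"))  # AF41 -> 100010
print(format(af_dscp(2, 3), "06b"))  # AF23 -> 010110
```

The trailing 0 places all AF codepoints in the standards pool of the DSCP space.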
The AF PHB ensures that packets are forwarded with a high probability of delivery to the destination, within the bounds of the rate agreed in an SLA. AF traffic at an ingress port that exceeds the agreed rate is considered non-conforming, or "out of profile", and the excess packets are not delivered with the same probability as conforming, "in profile" packets. Under network congestion, out-of-profile packets are dropped before in-profile packets.
When service levels are defined using AF classes, different quantities and qualities of service between the classes can be realized by allocating different amounts of bandwidth and buffer space to the four AF classes. Unlike EF, most AF traffic is non-real-time traffic using TCP, and RED is an AQM (Active Queue Management) strategy well suited to AF PHBs. The four AF classes can be implemented as four separate queues among which the output-port bandwidth is divided. Within each AF queue, packets are marked with three "colors" corresponding to the three drop-precedence levels.
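Color-aware queue management along these lines can be sketched with per-color RED thresholds, where a "worse" color (higher drop precedence) starts being dropped at a smaller average queue. The threshold and probability values here are hypothetical, not recommended settings:

```python
# Hypothetical per-color RED thresholds for one AF queue.
THRESHOLDS = {               # color: (min_th, max_th) in packets
    "green":  (30, 60),      # AFx1, low drop precedence
    "yellow": (20, 40),      # AFx2, medium drop precedence
    "red":    (10, 20),      # AFx3, high drop precedence
}

def drop_probability(color, avg_queue, max_p=0.1):
    """Linear RED-style drop probability using this color's thresholds."""
    min_th, max_th = THRESHOLDS[color]
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

At the same average queue length, out-of-profile (red) packets see a much higher drop probability than in-profile (green) ones, which is exactly the AF ordering described above.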
Of the 32 Pool 1 DSCPs (codepoints of the form xxxxx0), 21 have been standardized: one for the EF PHB, 12 for the AF PHBs, and 8 for the Class Selector (CS) PHBs. This leaves 11 Pool 1 codepoints available for future standardization.
2.2.5. Example of Differentiated Services
We will look at an example of the Differentiated Service model and mechanism of operation. The architecture of Differentiated Service consists of two basic sets of functions:
Edge functions: include packet classification and traffic conditioning. At the inbound edge of the network, incoming packets are marked. In particular, the DS field in the packet header is set to a certain value. For example, in Figure 2.12, packets sent from H1 to H3 are marked at R1, while packets from H2 to H4 are marked at R2. The labels on the received packets identify the service class to which they belong. Different traffic classes receive different services in the core network. The RFC definition uses the term behavior aggregate rather than the term traffic class. After being marked, a packet can be forwarded immediately into the network, delayed for a period of time before being forwarded, or dropped. We will see that there are many factors that affect how a packet is marked, and whether it is forwarded immediately, delayed, or dropped.
Figure 2.12 DiffServ Example
Core functions: When a DS-marked packet arrives at a DiffServ-capable router, the packet is forwarded to the next router according to the per-hop behavior associated with its class. The per-hop behavior determines how the router's buffers and bandwidth are shared between competing classes. An important principle of the Differentiated Services architecture is that a router's per-hop behavior depends only on the packet's marking, i.e. the class to which it belongs. Therefore, if packets sent from H1 to H3 in the figure receive the same marking as packets from H2 to H4, the network routers treat them exactly the same, regardless of whether a packet originated from H1 or H2; for example, R3 does not distinguish between packets from H1 and H2 when forwarding to R4. The Differentiated Services architecture thus avoids maintaining router state for individual source-destination pairs, which is important for network scalability.
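The stateless-core principle can be sketched as a forwarding decision keyed only by the DSCP; the table entries below are illustrative:

```python
# Forwarding treatment is chosen from the DSCP alone; source and
# destination addresses play no role in the core router's decision.
PHB_TABLE = {
    0b101110: "EF",           # expedited forwarding
    0b100010: "AF41",         # assured forwarding, class 4, low drop prec.
    0b000000: "best-effort",  # default PHB
}

def per_hop_behavior(dscp):
    """Look up the PHB for a packet from its DSCP marking only."""
    return PHB_TABLE.get(dscp, "best-effort")

# Packets from H1 and H2 carrying the same marking get identical treatment:
print(per_hop_behavior(0b101110))  # EF
```

Because the lookup key is just the 6-bit DSCP, the table stays constant-sized no matter how many flows traverse the router, which is the scalability argument made above.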
Chapter Conclusion
Chapter 2 has presented and clarified two main models for deploying quality of service in IP networks. While the traditional best-effort model has many disadvantages, later models such as IntServ and DiffServ have partly solved the problems that best-effort could not. IntServ guarantees quality of service for each individual flow; it is built on a model similar to circuit switching, using the RSVP resource-reservation protocol. IntServ suits services that require fixed, non-shared bandwidth, such as VoIP and multicast TV services. However, IntServ has disadvantages: it consumes many network resources, scales poorly, and lacks flexibility. DiffServ was conceived to overcome the disadvantages of the IntServ model.
DiffServ ensures quality on the principle of hop-by-hop behavior driven by the priority markings of packets. The policy for different traffic types is decided by the administrator and can be changed as conditions require, so it is very flexible. DiffServ makes better use of network resources, avoiding idle bandwidth and processing capacity on routers. In addition, the DiffServ model can be deployed across many independent domains, which makes it easy to scale the network.
Chapter 3: METHODS TO ENSURE QoS FOR MULTIMEDIA COMMUNICATIONS
In packet-switched networks, different packet flows often have to share the transmission medium all the way to the destination station. To ensure the fair and efficient allocation of bandwidth to flows, appropriate serving mechanisms are required at network nodes, especially at gateways or routers, where many different data flows often pass through. The scheduler is responsible for serving packets of the selected flow and deciding which packet will be served next. Here, a flow is understood as a set of packets belonging to the same priority class, or originating from the same source, or having the same source and destination addresses, etc.
In the normal state, when there is no congestion, packets are forwarded as soon as they arrive. Under congestion, if QoS-assurance methods are not applied, prolonged congestion can cause packet drops, degrading service quality. In some cases congestion becomes prolonged and widespread, which can freeze the network or cause many packets to be dropped, seriously degrading service quality.
Therefore, sections 3.2 and 3.3 of this chapter introduce some typical techniques for monitoring network traffic load in order to predict and prevent congestion before it occurs, by dropping packets early when there are signs of impending congestion.
3.1. DropTail method
DropTail is a simple, traditional queue-management method based on the FIFO mechanism. All incoming packets are placed in the queue; when the queue is full, subsequent packets are dropped.
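The DropTail mechanism can be sketched in a few lines (the class name is illustrative):

```python
from collections import deque

class DropTailQueue:
    """FIFO queue with fixed capacity: arrivals to a full queue are dropped."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # tail drop: newest arrival is discarded
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None
```

Note that the drop decision depends only on whether the queue is full, never on which flow the packet belongs to, which is the root of the lock-out and full-queue problems discussed next.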
Due to its simplicity and ease of implementation, DropTail has been used for many years on Internet router systems. However, this algorithm has the following disadvantages:
− Cannot avoid "lock-out": this occurs when one or a few traffic flows monopolize the queue, so that packets of other connections cannot pass through the router. The phenomenon strongly affects reliable transport protocols such as TCP: under its congestion-control algorithm, a locked-out TCP connection reduces its window size and hence its transmission rate multiplicatively.
− Can cause global synchronization: this is the result of severe lock-out. When the queues of some neighboring routers are monopolized by a few connections, a series of other TCP connections cannot get through and simultaneously reduce their transmission rates. After the monopolizing connections pause and the queues drain, it takes a considerable amount of time for the TCP connections to return to their original rates.
− Full-queue phenomenon: Internet traffic is bursty, so packets arrive at the router in clusters rather than one by one. The DropTail mechanism therefore lets the queue remain full for long periods, leading to large average queuing delays. With DropTail, the only remedy is to enlarge the router's buffer, which is expensive and ineffective.
− No QoS guarantee: with the DropTail mechanism there is no way to prioritize important packets so that they pass through the router earlier once all are queued. For multimedia communication, however, a stable connection and a stable rate are extremely important, and the DropTail algorithm cannot provide them.
The problem of choosing router buffer sizes is to "absorb" short bursts of traffic without introducing excessive queuing delay. This matters for bursty data transmission: the queue size determines the size of the packet bursts (traffic spikes) that can be carried without drops at the routers.
In IP networks, packet dropping is an important mechanism for indirectly signaling congestion to the end stations. A solution that prevents router queues from filling up while reducing the packet-drop rate is called active queue management (AQM).
3.2. Random early drop method – RED
3.2.1 Overview
RED (Random Early Detection of congestion, also Random Early Drop) is one of the first AQM algorithms, proposed in 1993 by Sally Floyd and Van Jacobson, two scientists at the Lawrence Berkeley Laboratory, University of California, USA. Thanks to its clear advantages over earlier queue-management algorithms, RED has been widely implemented and deployed on the Internet.
The most fundamental point of their work is that the most effective place to detect congestion and react to it is at the gateway or router.
Source entities (senders) can also do this by estimating end-to-end delay, throughput variability, or the rate of packet retransmissions due to drops. However, the sender's and receiver's view of a particular connection cannot reveal which gateways in the network are congested, nor distinguish propagation delay from queuing delay. Only the gateway has a true view of the queue state, the link share of the connections passing through it at any given time, and the quality-of-service requirements of the traffic flows. The RED gateway monitors the average queue length to detect early signs of impending congestion (the average queue length exceeding a predetermined threshold) and reacts in one of two ways:
− Drop incoming packets with a certain probability, to indirectly inform the source of congestion, the source needs to reduce the transmission rate to keep the queue from filling up, maintaining the ability to absorb incoming traffic spikes.
− Mark “congestion” with a certain probability in the ECN field in the header of TCP packets to notify the source (the receiving entity will copy this bit into the acknowledgement packet).
Figure 3. 1 RED algorithm
The main goal of RED is to avoid congestion by keeping the average queue size within a small, stable region, which also keeps queuing delay small and stable. Achieving this also helps to avoid global synchronization, to avoid bias against bursty traffic flows (flows with low average throughput but high variability), and to maintain an upper bound on the average queue size even without cooperation from transport-layer protocols.
To achieve the above goals, RED gateways must do the following:
− The first is to detect congestion early and react appropriately to keep the average queue size small enough to keep the network operating in the low latency, high throughput region, while still allowing the queue size to fluctuate within a certain range to absorb short-term fluctuations. As discussed above, the gateway is the most appropriate place to detect congestion and is also the most appropriate place to decide which specific connection to report congestion to.
− The second task is to notify the source of congestion, by marking packets so that the source reduces its traffic. Normally a RED gateway randomly drops packets; however, if congestion is detected before the queue is full, dropping can be combined with packet marking. The RED gateway thus has two options, drop or mark, where marking sets the ECN field of the packet with a certain probability to signal the source to reduce the traffic entering the network.
− An important goal for RED gateways is to avoid global synchronization and to avoid bias against bursty flows. Global synchronization occurs when all connections reduce their window sizes simultaneously, causing a severe simultaneous drop in throughput. Drop Tail and Random Drop strategies, by contrast, are very sensitive to bursty flows: the gateway queue often overflows when packets from such flows arrive. To avoid both phenomena, gateways can use dedicated algorithms to detect congestion and to decide which connections to notify of it. The RED gateway randomly selects incoming packets to mark; with this method, the probability that a packet from a particular connection is marked is proportional to that connection's share of the bandwidth at the gateway.
− Another goal is to control the average queue size even without cooperation from the source entities. This can be done by dropping packets (instead of marking them) when the average size exceeds an upper threshold. This is necessary when most connections have transmission times shorter than the round-trip time, or when the sources cannot reduce their traffic in response to marking or dropping (as with UDP flows).
3.2.2 Algorithm
This section describes the algorithm for RED gateways. RED gateways calculate the average queue size using a low-pass filter. This average queue size is compared with two thresholds: minth and maxth. When the average queue size is less than the lower threshold, no incoming packets are marked or dropped; when the average queue size is greater than the upper threshold, all incoming packets are dropped. When the average queue size is between minth and maxth, each incoming packet is marked or dropped with a probability pa, where pa is a function of the average queue size avg; the probability of marking or dropping a packet for a particular connection is proportional to the bandwidth share of that connection at the gateway. The general algorithm for a RED gateway is described as follows: [5]
for each packet arrival:
    calculate the average queue size avg
    if minth ≤ avg < maxth:
        calculate the marking probability pa
        with probability pa: mark or drop the arriving packet
    else if avg ≥ maxth:
        mark or drop the arriving packet
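The algorithm can be sketched in Python, using an exponentially weighted moving average as the low-pass filter and the marking probability pa from Floyd and Jacobson's paper. Parameter values are illustrative defaults, not recommended settings:

```python
import random

class REDQueue:
    """Sketch of RED marking/dropping (Floyd & Jacobson, 1993)."""

    def __init__(self, min_th=5, max_th=15, max_p=0.02, weight=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.weight = max_p, weight
        self.avg = 0.0      # EWMA of the queue size (the low-pass filter)
        self.count = 0      # packets since the last mark/drop

    def on_arrival(self, queue_len):
        """Return True if the arriving packet should be marked/dropped."""
        # low-pass filter: avg <- (1 - w) * avg + w * q
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg < self.min_th:
            self.count = 0
            return False
        if self.avg >= self.max_th:
            self.count = 0
            return True                 # all arrivals marked/dropped
        self.count += 1
        # pb grows linearly from 0 to max_p between the two thresholds
        pb = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        # pa spreads marks evenly over time instead of in clusters
        pa = pb / max(1e-9, 1 - self.count * pb)
        if random.random() < pa:
            self.count = 0
            return True
        return False
```

The `count` correction makes the gaps between marked packets roughly uniform, which is what lets RED mark each connection in proportion to its bandwidth share rather than in synchronized bursts.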



