partially destroyed. Glutamine and asparagine dissociate into glutamic acid, aspartic acid and NH4+, and most vitamins are destroyed (Nguyen Duc Luong, 2004).
1.6.1.2 Alkaline hydrolysis
Amino acids can also be obtained by hydrolysis with NaOH, heated for many hours. The products obtained are mostly amino acids, but they are racemized, which reduces nutritional value; lysinoalanine is formed, reducing the lysine content; and amino acids such as cysteine, serine and threonine are destroyed. Therefore, this method is rarely used in the food industry (Nguyen Duc Luong, 2004).
low. The EF PHB requires sufficiently large output-port bandwidth to provide low delay, low loss, and low jitter.
EF PHBs can be implemented if the output port's bandwidth is sufficiently large, combined with small buffer sizes and other network resources dedicated to EF packets, to allow the router's service rate for EF packets on an output port to exceed the arrival rate λ of packets at that port.
This means that EF PHB packets are given a pre-allocated share of output bandwidth and a priority that guarantees minimal loss, delay and jitter before the service is put into operation.
PHB EF is suitable for channel emulation, leased-line emulation, and real-time services such as voice and video that cannot tolerate high loss, delay or jitter.
Figure 2.10 Example of EF installation
Figure 2.10 shows an example of an EF PHB implementation using a simple priority-queue scheduling technique. At the edges of the DS domain, EF packet traffic is prioritized according to the values agreed upon in the SLA. The EF queue in the figure must output packets at a rate μ higher than the packet arrival rate λ. To provide an EF PHB across an end-to-end DS domain, bandwidth at the output ports of the core routers must be allocated in advance to guarantee μ > λ. This can be done by a pre-configured provisioning process. In the figure, EF packets are placed in the priority queue (the upper queue), which, provisioned in this way, can operate with μ > λ.
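The priority-queue scheduling just described can be sketched in a few lines of Python; the class and method names here are illustrative, not taken from any router implementation:

```python
from collections import deque

class StrictPriorityScheduler:
    """Two-queue strict-priority scheduler: EF packets are always
    served before best-effort packets, so EF delay and jitter stay
    low as long as the EF service rate exceeds the arrival rate."""

    def __init__(self):
        self.ef_queue = deque()   # priority (EF) queue, the upper queue in Figure 2.10
        self.be_queue = deque()   # best-effort queue

    def enqueue(self, packet, is_ef):
        (self.ef_queue if is_ef else self.be_queue).append(packet)

    def dequeue(self):
        # Strict priority: the best-effort queue is served only
        # when the EF queue is empty.
        if self.ef_queue:
            return self.ef_queue.popleft()
        if self.be_queue:
            return self.be_queue.popleft()
        return None
```

In a real router the choice of queue would be driven by the packet's DSCP, and the EF queue would also be policed so that μ > λ holds.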
Since EF is primarily used for real-time services such as voice and video, and since these services typically run over UDP rather than TCP, RED is generally not suitable for EF queues: applications using UDP do not respond to random packet drops, so RED would discard packets to no effect.
2.2.4.2 Assured Forwarding (AF) PHB
PHB AF is defined by RFC 2597. The purpose of PHB AF is to deliver packets reliably and therefore delay and jitter are considered less important than packet loss. PHB AF is suitable for non-real-time services such as applications using TCP. PHB AF first defines four classes: AF1, AF2, AF3, AF4. For each of these AF classes, packets are then classified into three subclasses with three distinct priority levels.
Table 2.8 shows the four AF classes and 12 AF subclasses and the DSCP values for the 12 AF subclasses defined by RFC 2597. RFC 2597 also allows for more than three separate priority levels to be added for internal use. However, these separate priority levels will only have internal significance.
PHB Class   PHB Subclass   Drop Precedence   DSCP
AF4         AF41           Low               100010
            AF42           Medium            100100
            AF43           High              100110
AF3         AF31           Low               011010
            AF32           Medium            011100
            AF33           High              011110
AF2         AF21           Low               010010
            AF22           Medium            010100
            AF23           High              010110
AF1         AF11           Low               001010
            AF12           Medium            001100
            AF13           High              001110
Table 2.8 AF DSCPs
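The regularity of Table 2.8 can be captured in a one-line formula: the DSCP of subclass AFxy is the 6-bit pattern xxxyy0, i.e. (x << 3) | (y << 1). A small sketch (the function name is ours, not from RFC 2597):

```python
def af_dscp(af_class, drop_prec):
    """Return the 6-bit DSCP for subclass AF<af_class><drop_prec>.

    Per RFC 2597 the codepoint is 'cccdd0', where ccc is the AF
    class (1-4) and dd the drop precedence (1-3)."""
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF classes are 1-4, drop precedences 1-3")
    return (af_class << 3) | (drop_prec << 1)

# Reproduce a few rows of Table 2.8:
assert format(af_dscp(4, 1), "06b") == "100010"   # AF41
assert format(af_dscp(1, 3), "06b") == "001110"   # AF13
```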
The AF PHB ensures that packets are forwarded with a high probability of delivery to the destination within the bounds of the rate agreed upon in an SLA. If AF traffic at an ingress port exceeds the agreed rate, it is considered non-compliant or "out of profile", and the excess packets are not delivered to the destination with the same probability as packets within the agreed profile, the "in profile" packets. When the network is congested, out-of-profile packets are dropped before in-profile packets.
When service levels are defined using AF classes, quantitative and qualitative differences between AF classes can be realized by allocating different amounts of bandwidth and buffer space to the four AF classes. Unlike EF, most AF traffic is non-real-time traffic using TCP, so RED, an AQM (Active Queue Management) strategy, is well suited for use in AF PHBs. The four AF PHB classes can be implemented as four separate queues, with the output-port bandwidth divided among them. Within each AF queue, packets are marked with three "colors" corresponding to the three separate drop-precedence levels.
Of the 32 Pool 1 DSCPs, 21 have been standardized so far: one for the EF PHB, 12 for the AF PHBs (Table 2.8), and 8 for the Class Selector codepoints. The remaining 11 Pool 1 DSCPs are still available for future standards.
2.2.5.Example of Differentiated Services
We will look at an example of the Differentiated Service model and mechanism of operation. The architecture of Differentiated Service consists of two basic sets of functions:
Edge functions: include packet classification and traffic conditioning. At the inbound edge of the network, incoming packets are marked. In particular, the DS field in the packet header is set to a certain value. For example, in Figure 2.12, packets sent from H1 to H3 are marked at R1, while packets from H2 to H4 are marked at R2. The labels on the received packets identify the service class to which they belong. Different traffic classes receive different services in the core network. The RFC definition uses the term behavior aggregate rather than the term traffic class. After being marked, a packet can be forwarded immediately into the network, delayed for a period of time before being forwarded, or dropped. We will see that there are many factors that affect how a packet is marked, and whether it is forwarded immediately, delayed, or dropped.
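The marking step can be made concrete. Per RFC 2474 the DS field occupies the former IPv4 TOS byte, with the 6-bit DSCP in the upper bits and the 2-bit ECN field (RFC 3168) in the lower bits; the function name below is illustrative:

```python
def build_ds_field(dscp, ecn=0):
    """Pack a 6-bit DSCP and a 2-bit ECN value into the 8-bit DS
    field (the former IPv4 TOS byte), per RFC 2474 / RFC 3168."""
    if not (0 <= dscp < 64 and 0 <= ecn < 4):
        raise ValueError("DSCP is 6 bits, ECN is 2 bits")
    return (dscp << 2) | ecn

# Marking a packet with the EF codepoint (DSCP 101110 = 46)
# yields the well-known TOS byte value 0xB8.
assert build_ds_field(0b101110) == 0xB8
```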
Figure 2.12 DiffServ Example
Core functions: When a DS-marked packet arrives at a DiffServ-capable router, it is forwarded to the next router according to the per-hop behavior associated with the packet's class. The per-hop behavior governs how the router's buffers and bandwidth are shared among competing classes. An important principle of the Differentiated Services architecture is that a router's per-hop behavior is based only on the packet's marking, that is, on the class to which it belongs. Therefore, if packets sent from H1 to H3 as shown in the figure receive the same marking as packets from H2 to H4, the network routers treat them exactly the same, regardless of whether they originated from H1 or H2. For example, R3 does not distinguish between packets from H1 and H2 when forwarding packets to R4. The Differentiated Services architecture thus avoids maintaining router state for separate source-destination pairs, which is important for network scalability.
Chapter Conclusion
Chapter 2 has presented and clarified two main models for deploying quality of service in IP networks. While the traditional best-effort model has many disadvantages, later models such as IntServ and DiffServ have partly solved the problems that best-effort could not. IntServ guarantees quality of service for each separate flow; it is built similarly to the circuit-switching model and uses the RSVP resource reservation protocol. IntServ is suitable for services that require fixed, unshared bandwidth, such as VoIP and multicast TV services. However, IntServ has disadvantages: it consumes a lot of network resources, scales poorly, and lacks flexibility. DiffServ was born with the idea of solving the disadvantages of the IntServ model.
DiffServ guarantees quality on the principle of per-hop behavior driven by the priority of marked packets. The policy for different types of traffic is decided by the administrator and can be changed as needed, so it is very flexible. DiffServ makes better use of network resources, avoiding idle bandwidth and processing capacity on routers. In addition, the DiffServ model can be deployed across many independent domains, making network expansion easy.
Chapter 3: METHODS TO ENSURE QoS FOR MULTIMEDIA COMMUNICATIONS
In packet-switched networks, different packet flows often have to share the transmission medium all the way to the destination station. To ensure the fair and efficient allocation of bandwidth to flows, appropriate serving mechanisms are required at network nodes, especially at gateways or routers, where many different data flows often pass through. The scheduler is responsible for serving packets of the selected flow and deciding which packet will be served next. Here, a flow is understood as a set of packets belonging to the same priority class, or originating from the same source, or having the same source and destination addresses, etc.
In the normal state, when there is no congestion, packets are sent on as soon as they arrive. Under congestion, if QoS assurance methods are not applied, prolonged congestion can cause packet drops that degrade service quality. In some cases congestion is prolonged and widespread, which can "freeze" the network or cause many packets to be dropped, seriously affecting service quality.
Therefore, in sections 3.2 and 3.3 of this chapter, we introduce some typical techniques that monitor network traffic load to predict and prevent congestion before it occurs, by dropping packets early when there are signs of impending congestion.
3.1. DropTail method
DropTail is a simple, traditional queue management method based on the FIFO mechanism. All incoming packets are placed in the queue; when the queue is full, subsequent packets are dropped.
Due to its simplicity and ease of implementation, DropTail has been used for many years on Internet router systems. However, this algorithm has the following disadvantages:
− Cannot avoid the "lock-out" phenomenon: this occurs when one or a few traffic streams monopolize the queue, so that packets of other connections cannot pass through the router. This phenomenon greatly affects reliable transport protocols such as TCP: according to its congestion-avoidance algorithm, a locked-out TCP connection reduces its window size, and hence its transmission rate, multiplicatively.
− Can cause global synchronization: this is the result of a severe lock-out. When the queues of some neighboring routers are monopolized by a few connections, a series of other TCP connections cannot get through and simultaneously reduce their transmission rates. After the monopolizing connections pause and the queues clear, it takes a considerable amount of time for the TCP connections to return to their original speeds.
− The "full queue" phenomenon: Internet traffic is often bursty, so packets arrive at the router in clusters rather than evenly spaced. The DropTail mechanism therefore lets the queue stay full for long periods, leading to large average packet delays. With DropTail, the only remedy is to increase the router's buffer, which is expensive and ineffective.
− No QoS guarantee: with the DropTail mechanism, there is no way to give important packets priority through the router when all are queued. Meanwhile, for multimedia communication, a stable connection and rate are extremely important, and the DropTail algorithm cannot satisfy this requirement.
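The DropTail behavior described above is trivial to express in code; the following is a schematic model for illustration, not router software:

```python
from collections import deque

class DropTailQueue:
    """FIFO queue with a fixed capacity: arrivals that find the
    queue full are dropped from the tail."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0          # count of tail-dropped packets

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1     # queue full: the newcomer is lost
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None
```

Note that the drop decision depends only on the instantaneous queue length, with no regard for which flow a packet belongs to; this is exactly what makes lock-out and global synchronization possible.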
Router buffer sizing must allow the queue to "absorb" short traffic bursts without causing excessive queuing delay; this is necessary for bursty data transmission. The queue size determines the size of the packet bursts (traffic spikes) that we want to be able to transmit without drops at the routers.
In IP networks, packet dropping is an important mechanism for indirectly reporting congestion to end stations. A solution that prevents router queues from filling up while reducing the packet drop rate is called active queue management (AQM).
3.2. Random early detection method – RED
3.2.1 Overview
RED (Random Early Detection of congestion; Random Early Drop) is one of the first AQM algorithms proposed in 1993 by Sally Floyd and Van Jacobson, two scientists at the Lawrence Berkeley Laboratory of the University of California, USA. Due to its outstanding advantages compared to previous queue management algorithms, RED has been widely installed and deployed on the Internet.
The most fundamental point of their work is that the most effective place to detect congestion and react to it is at the gateway or router.
Source entities (senders) can also do this by estimating end-to-end delay, throughput variability, or the rate of packet retransmissions due to drop. However, the sender and receiver view of a particular connection cannot tell which gateways on the network are congested, and cannot distinguish between propagation delay and queuing delay. Only the gateway has a true view of the state of the queue, the link share of the connections passing through it at any given time, and the quality of service requirements of the
traffic flows. The RED gateway monitors the average queue length, which detects early signs of impending congestion (average queue length exceeding a predetermined threshold) and reacts appropriately in one of two ways:
− Drop incoming packets with a certain probability, to indirectly inform the source of congestion; the source then needs to reduce its transmission rate to keep the queue from filling up, maintaining the ability to absorb incoming traffic spikes.
− Mark “congestion” with a certain probability in the ECN field in the header of TCP packets to notify the source (the receiving entity will copy this bit into the acknowledgement packet).
Figure 3. 1 RED algorithm
The main goal of RED is to avoid congestion by keeping the average queue size within a sufficiently small and stable region, which also keeps the queuing delay small and stable. Achieving this goal also helps to avoid global synchronization, to avoid bias against bursty traffic flows (i.e. flows with low average throughput but high variability), and to maintain an upper bound on the average queue size even without cooperation from transport-layer protocols.
To achieve the above goals, RED gateways must do the following:
− The first is to detect congestion early and react appropriately to keep the average queue size small enough to keep the network operating in the low latency, high throughput region, while still allowing the queue size to fluctuate within a certain range to absorb short-term fluctuations. As discussed above, the gateway is the most appropriate place to detect congestion and is also the most appropriate place to decide which specific connection to report congestion to.
− The second task is to notify the source of congestion, which is done by marking or dropping packets so that the source reduces its traffic. Normally the RED gateway drops packets at random; however, if congestion is detected before the queue is full, dropping can be combined with packet marking. The RED gateway thus has two options, drop or mark, where marking sets the ECN field of the packet with a certain probability to signal the source to reduce the traffic entering the network.
− An important goal of RED gateways is to avoid global synchronization and bias against bursty flows. Global synchronization occurs when all connections simultaneously reduce their window sizes, leading to a severe simultaneous drop in throughput. Drop Tail and Random Drop strategies, on the other hand, are very sensitive to bursty flows: the gateway queue often overflows when packets from these flows arrive. To avoid these two phenomena, gateways can use special algorithms to detect congestion and decide which connections to notify. The RED gateway randomly selects incoming packets to mark; with this method, the probability of marking a packet from a particular connection is proportional to the connection's share of bandwidth at the gateway.
− Another goal is to control the average queue size even without cooperation from the source entities. This can be done by dropping packets when the average size exceeds an upper threshold (instead of marking it). This approach is necessary in cases where most connections have transmission times that are less than the round-trip time, or where the source entities are not able to reduce traffic in response to marking or dropping packets (such as UDP flows).
3.2.2 Algorithm
This section describes the algorithm for RED gateways. RED gateways calculate the average queue size using a low-pass filter. This average queue size is compared with two thresholds: minth and maxth. When the average queue size is less than the lower threshold, no incoming packets are marked or dropped; when the average queue size is greater than the upper threshold, all incoming packets are dropped. When the average queue size is between minth and maxth, each incoming packet is marked or dropped with a probability pa, where pa is a function of the average queue size avg; the probability of marking or dropping a packet for a particular connection is proportional to the bandwidth share of that connection at the gateway. The general algorithm for a RED gateway is described as follows: [5]
For each packet arrival:
    calculate the average queue size avg
    if minth ≤ avg < maxth:
        calculate the marking/dropping probability pa
        with probability pa, mark or drop the arriving packet
    else if avg ≥ maxth:
        mark or drop the arriving packet
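The general algorithm above can be fleshed out as a small Python sketch. For simplicity it uses the linear per-packet probability pb directly, omitting the count-based correction pa = pb/(1 − count·pb) of the full algorithm, and the parameter values are illustrative only:

```python
import random

class REDQueue:
    """Sketch of the RED drop decision (Floyd & Jacobson, 1993)."""

    def __init__(self, minth=5, maxth=15, wq=0.002, maxp=0.1):
        self.minth, self.maxth = minth, maxth
        self.wq = wq              # weight of the low-pass filter
        self.maxp = maxp          # drop probability at avg == maxth
        self.avg = 0.0            # filtered (average) queue size

    def on_arrival(self, queue_len):
        # Exponentially weighted moving average of the queue size.
        self.avg = (1 - self.wq) * self.avg + self.wq * queue_len
        if self.avg < self.minth:
            return "enqueue"      # no sign of congestion
        if self.avg >= self.maxth:
            return "drop"         # forced drop (or ECN mark)
        # Between the thresholds: drop with probability growing
        # linearly from 0 at minth to maxp at maxth.
        pb = self.maxp * (self.avg - self.minth) / (self.maxth - self.minth)
        return "drop" if random.random() < pb else "enqueue"
```

With wq = 1 the filter degenerates to the instantaneous queue length, which makes the behavior easy to check by hand.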
1.6.1.3 Enzymatic hydrolysis

To obtain amino acid preparations, hydrolysis by protease enzymes is widely applied in many scientific fields, and is especially suitable for food and pharmaceutical applications because the use of enzymes has many advantages. The biggest advantages of this method are control over the degree of hydrolysis and the mild processing conditions; the product obtained has a high protein content and an undamaged amino acid composition (Nguyen Duc Luong, 2004).
1.6.2 Factors affecting the hydrolysis process by protease enzyme
1.6.2.1 Effect of temperature
Because enzymes are proteins, the reaction rate increases with temperature only within a certain range that does not affect the enzyme's structure. Each enzyme has a different optimum temperature, depending on the origin of the enzyme, the conditions, and the temperature sensitivity of the protein-enzyme molecule. The suitable temperature for many enzymes is about 40-50 °C; enzymes of plant and microbial origin have a higher suitable operating temperature (Pham Thi Tran Chau and Phan Tuan Nghia, 2006).
The optimal operating temperature of an enzyme is not fixed but varies depending on the substrate and hydrolysis time.
1.6.2.2 Effect of pH
Enzyme activity depends on the pH of the environment, because pH affects the ionization state of R groups in amino acids in the enzyme molecule and the substrate. The appropriate pH for enzyme activity is when the enzyme and substrate combine easily. Each enzyme is most active only at a certain pH range, called the optimal pH. The optimal pH of each enzyme is not fixed, it can change depending on the nature, concentration of the substrate and temperature (Pham Thi Tran Chau and Phan Tuan Nghia, 2006).
1.6.2.3 Effect of enzyme concentration and substrate concentration
Enzyme concentration greatly affects enzyme reaction. Under conditions of excess substrate, the reaction rate depends linearly on enzyme concentration.
When the substrate concentration is low, the level of contact between the enzyme and the substrate decreases, so the enzyme reaction decreases. The reaction rate reaches its maximum when all the enzyme is combined with the substrate (Pham Thi Tran Chau and Phan Tuan Nghia, 2006).
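The two regimes described above (linear dependence on enzyme concentration under excess substrate, and saturation when all the enzyme is substrate-bound) are summarized by the classical Michaelis-Menten equation, a standard result added here for reference rather than taken from the cited source:

```latex
v = \frac{V_{\max}\,[S]}{K_m + [S]}, \qquad V_{\max} = k_{cat}\,[E]_0
```

where [S] is the substrate concentration, [E]_0 the total enzyme concentration, and K_m the substrate concentration at which the rate is half-maximal. For [S] >> K_m the rate approaches V_max, which is proportional to [E]_0; for [S] << K_m the rate falls off approximately linearly with [S].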
1.6.2.4 Effect of hydrolysis time
Hydrolysis time affects the efficiency of the hydrolysis process. The longer the hydrolysis time, the more opportunity the protease has to hydrolyze the substrate thoroughly. However, if the hydrolysis time is too long, microorganisms will produce more secondary products such as NH3, H2S, CO2, indole... Conversely, if the hydrolysis time is too short, protein hydrolysis is incomplete, the efficiency is poor, and raw material is wasted. Usually the hydrolysis proceeds rapidly at first; later, as the substrate concentration falls while the product concentration rises, and as the enzyme loses stability over time, the reaction rate gradually decreases (Tran Minh Tam, 1998).
1.6.2.5 Effect of contact area
In the process of hydrolysis, the important factor that promotes the hydrolysis process is the contact area. To create better conditions for enzyme hydrolysis is to increase
the ability to contact between enzyme and substrate, to do so, the substrate size must be reduced before hydrolysis (Nguyen Trong Can and Do Minh Phung, 1990).
In addition to the above factors, the hydrolysis process is also affected by other factors such as activators and inhibitors, metal anions, and the nature of the enzyme.
In summary, the hydrolysis process is affected by many factors, so depending on the hydrolyzed material, these factors must be optimized to achieve high hydrolysis efficiency.
1.6.3 Application of hydrolyzed protein
The product of protein hydrolysis is a hydrolyzed protein solution rich in low molecular weight peptides, especially di- and tri-peptides with few free amino acids, which is considered to have high nutritional value (Bhaskar et al., 2007).
Protease enzymes break down muscle protein into soluble and insoluble components. The insoluble components contain undesirable substances and fats that can be used in animal feed. The soluble components contain hydrolyzed proteins and low fat content. Hydrolyzed proteins can be used to enhance food flavor, supplement functional foods, or simply as nutritional additives to low-protein foods (Kurozawa et al., 2008). Fish protein hydrolysates (FPHs) have been successfully tested for incorporation into various food systems such as cereal products, fish and meat products, desserts and crackers, etc. (Kristinsson and Rasco, 2000).
Protein hydrolysates play an important role in animal nutrition, especially in enhancing immune resistance (Pasupuleti et al., 2010). FPHs have been used in aquaculture to enhance the growth and survival of fish (Kotzamanis, 2007). The results showed that peptides in protein hydrolysates affected the growth performance and immunity of seabass larvae. In another study, Nguyen Thi My Huong et al. (2012) conducted a feeding trial to evaluate the effects of tuna head hydrolysate supplementation on the survival and growth of shrimp (Penaeus vannamei) and reported that tuna head hydrolysate significantly improved both growth and survival of shrimp.
FPHs can be used as a nitrogen source to sustain microbial growth. Ghorbel et al. (2005) used defatted protein hydrolysate from herring (Sardinella aurita) as a nitrogen source for extracellular lipase production by the filamentous fungus Rhizopus oryzae and reported higher lipase yields than when no protein hydrolysate was added.
Protein hydrolysates are also used in vaccine production and as plant growth regulators to increase commercial crop yields as well as for weed control in factories (Pasupuleti et al., 2010).
In recent years, there have been many studies on hydrolyzed animal proteins because of the outstanding properties that hydrolyzed proteins bring. Among them are studies on hydrolyzed pork, chicken and fish proteins (Soares et al., 2000; Vercruysse et al., 2005; Bhaskar et al., 2007; Kurozawa et al., 2008; Kurozawa et al., 2009; Rossi et al., 2009; Schmidt and Salas-Mellado, 2009; Silva et al., 2009; Zhang et al., 2010; Di Bernardini et al., 2011; Xijuan et al., 2012; Ha et al., 2013). Compared with beef, pork and chicken, crocodile meat contains less fat and more protein (Hoffman et al., 2000; Beilken et al., 2007). Therefore, crocodile meat is a more suitable raw material for hydrolysis to obtain protein hydrolysate.
Protein hydrolysis can generally be performed in much the same way as other meat proteins. The important thing to research is to find the right enzyme and reaction conditions as outlined above.
After hydrolysis, the liquid can be spray-dried into powder. This brings several advantages: the powder product is easy to store, has a longer shelf life, is easily added to various food products, and is suitable for mixing with other ingredients. Spray-drying the liquid product into powder can thus increase the application potential of the hydrolyzed product.
1.7 Overview of protease enzymes
Enzyme preparations are produced more and more and are used in most fields such as: food processing, agriculture, animal husbandry, medicine...
1.7.1 General introduction to protease enzymes
Protease enzymes catalyze the hydrolysis of the peptide bonds (-CO-NH-) in protein and polypeptide molecules down to the final product, amino acids. In addition, many proteases can also hydrolyze ester bonds and transfer amino acid residues.
Protease is essential for living organisms, very diverse in function from the cellular, organ to body level, so it is widely distributed in many objects from microorganisms (bacteria, fungi and viruses) to plants (papaya, pineapple...) and animals (liver, calf stomach...). Compared with animal and plant proteases, microbial proteases have different characteristics. First of all, the microbial protease system is a very complex system consisting of many enzymes that are very similar in structure, mass and molecular shape, so it is very difficult to separate them in the form of homogeneous crystals.
Also because it is a complex of many different enzymes, microbial proteases often have broad specificity for thorough and diverse hydrolysis products.
1.7.2 Classification of proteases
Proteases (peptidases) belong to subclass 4 of class 3 (EC 3.4). Proteases are divided into two types: endopeptidases and exopeptidases.
Based on the site of action on the polypeptide chain, exopeptidases are divided into two types:
+ Aminopeptidase: catalyzes the hydrolysis of peptide bonds at the free N-terminus of the polypeptide chain to release an amino acid, a dipeptide or a tripeptide.
+ Carboxypeptidase: catalyzes the hydrolysis of peptide bonds at the C-terminus of the polypeptide chain and releases an amino acid or a dipeptide.
Based on the kinetics of the catalytic mechanism, endopeptidases are divided into four groups:
+ Serine proteinase: are proteinases containing the –OH group of the serine radical in the active site and play a particularly important role in the catalytic activity of enzymes. This group includes two subgroups: chymotrypsin and subtilisin. The chymotrypsin group includes animal enzymes such as chymotrypsin, trypsin, elastase. The subtilisin group includes two types of bacterial enzymes such as Carlsberg subtilisin,
subtilisin BPN. Serine proteinases are typically highly active in the alkaline region and exhibit relatively broad substrate specificity.
+ Cysteine proteinase: proteinases containing a –SH group in the active site. Cysteine proteinases include plant proteinases such as papain, bromelin, some animal proteins and parasitic proteinases. Cysteine proteinases usually operate in the neutral pH range and have broad substrate specificity.
+ Aspartic proteinase: most aspartic proteinases belong to the pepsin group, which includes digestive enzymes such as pepsin, chymosin, cathepsin and renin. Aspartic proteinases contain a carboxyl group in the active site and are usually most active at acidic pH.
+ Metallo proteinase: is a group of proteinases found in bacteria, molds as well as higher microorganisms. Metallo proteinases usually operate in the neutral pH region and their activity is greatly reduced under the effect of EDTA.
In addition, based on pH, protease activity is classified into three groups: Acid protease: pH 2-4; Neutral protease: pH 7-8; Alkaline protease: pH 9-11 (Nguyen Trong Can et al., 1998).
1.7.3 Catalytic mechanism of protease enzyme
Although the active sites of microbial proteases are different, they all catalyze peptide bond hydrolysis reactions according to the same general mechanism as follows:
E + S → ES → ES' + P1 → E + P2
where:
E: Enzyme
S: Substrate
ES: Enzyme-substrate complex
ES': Acyl-enzyme intermediate complex (acyl enzyme)
P1: First product of the reaction chain (containing the newly formed free amino group)
P2: Second product of the reaction chain (containing the newly formed free carboxyl group) (Nguyen Van Mui, 2012).
1.7.4 Protease enzyme hydrolysis method
Protein → polypeptides → peptides → amino acids, each step catalyzed by the enzyme with the participation of H2O.
Figure 1.5: Intermediate products of protein hydrolysis
Protein hydrolysis is the process of breaking down protein chains at peptide bonds into intermediate products such as polypeptides, peptides and final products such as amino acid molecules (Bhaskar et al., 2007; McCarthy et al., 2013).
Currently, protein hydrolysis can be carried out with NaOH or with a protease enzyme system. In the food industry, however, enzymatic hydrolysis is preferred owing to its higher hydrolysis efficiency and the better quality of the resulting hydrolysate. Alkaline hydrolysis is avoided because it causes racemization, which reduces the nutritional value of the amino acids.
1.8 Research status on protein hydrolysis using enzymes
1.8.1 In the world
In recent years, there have been many studies on hydrolyzed animal proteins because of the outstanding properties that hydrolyzed proteins offer. These include studies on hydrolyzed proteins from pork, chicken, fish, mussels, squid and by-products of the seafood processing industry.
Zhuang et al. (2009) studied the optimization, by response surface methodology, of the antioxidant activity of jellyfish (Rhopilema esculentum) collagen hydrolysate. To find the hydrolysis conditions giving the highest hydroxyl radical scavenging activity, collagen extracted from jellyfish was hydrolyzed with trypsin. The optimal conditions obtained were pH 7.75, temperature 48.77 °C and an enzyme-to-substrate ratio of 3.50%. Analysis of variance in the response surface methodology showed that pH and enzyme-to-substrate ratio significantly affected the process (p < 0.05 and p < 0.01, respectively). The hydrolysate of jellyfish collagen was separated by HPLC into three fractions (HF-1 > 3000 Da, 1000 Da < HF-2 < 3000 Da, and HF-3 < 1000 Da). HF-2 had the highest hydroxyl radical scavenging activity and the highest yield of the three fractions.
According to Fang et al. (2012), the production of antioxidant hydrolysate from squid muscle protein was optimized using response surface methodology. Squid muscle protein, extracted from squid (Ommastrephes bartrami) products, was hydrolyzed by five protease enzymes (pepsin, trypsin, papain, Alcalase and Flavourzyme). DPPH free radical scavenging ability was used to evaluate the antioxidant activity of the hydrolysates. The hydrolysates obtained with papain had the highest antioxidant activity. Response surface methodology was used to optimize the hydrolysis conditions, including enzyme/substrate ratio (1–2%), reaction temperature (45–55 °C) and hydrolysis time (30–60 min). The optimum conditions were an enzyme/substrate ratio of 1.74%, a temperature of 51 °C and a time of 46 min, at which the DPPH free radical scavenging activity was 74.25%.
According to Kurozawa et al. (2008), protein hydrolysis from chicken meat using the Alcalase 2.4L enzyme was optimized, evaluating the effects of temperature (43–77 °C), enzyme/substrate ratio (0.8–4.2%) and pH (7.16–8.84) on the degree of hydrolysis and protein recovery. The enzymatic hydrolysis process was optimized for the maximum degree of hydrolysis and recovery. The optimal conditions were: temperature 52.5 °C, enzyme/substrate ratio 4.2% and pH 8. Under these conditions, the degree of hydrolysis was 31% and the protein recovery 91%. SDS-PAGE electrophoresis (12% separating gel, 4% stacking gel) showed that some proteins in the meat were cleaved after hydrolysis, and the contents of glutamic acid, aspartic acid, lysine and leucine were high.
Silva et al. (2009) studied the optimization of mussel meat protein hydrolysis using the Protamex enzyme. The effects of temperature (46–64 °C), enzyme/substrate ratio (0.48–5.52%) and pH (6.7–8.3) on the degree of hydrolysis were determined. Response surface methodology showed that the optimal conditions for enzymatic hydrolysis of mussel meat were pH 6.85, temperature 51 °C and an enzyme/substrate ratio of 4.5%. Under these conditions, the degree of hydrolysis was 26.5% and the protein recovery rate 65%.
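In studies such as these, the degree of hydrolysis (DH) is commonly determined by the pH-stat method of Adler-Nissen: DH is the fraction of peptide bonds cleaved, estimated from the base consumed to keep the pH constant during hydrolysis. A sketch of the computation (variable names are ours):

```python
def degree_of_hydrolysis(base_volume_l, base_normality, alpha,
                         protein_mass_g, h_tot):
    """pH-stat degree of hydrolysis (%), after Adler-Nissen.

    base_volume_l   -- volume of base consumed (L)
    base_normality  -- normality of the base (eq/L)
    alpha           -- average dissociation degree of the alpha-NH2 groups
    protein_mass_g  -- mass of protein substrate (g)
    h_tot           -- total number of peptide bonds (meq/g protein)
    """
    # milliequivalents of peptide bonds cleaved per gram of protein
    h = (base_volume_l * base_normality * 1000.0) / (alpha * protein_mass_g)
    return 100.0 * h / h_tot
```

For example, 1 mL of 0.5 N base consumed per gram of protein with h_tot = 8 meq/g and alpha = 1 gives DH = 6.25%.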


Since EF is primarily used for real-time services such as voice and video, and since these services use UDP rather than TCP, RED is generally not suitable for EF queues: applications using UDP do not respond to random packet drops, so RED would drop packets to no effect.
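The strict-priority service that EF relies on can be sketched in a few lines (a toy model; the class and method names are our own):

```python
from collections import deque

class StrictPriorityScheduler:
    """Toy model of the EF priority queue: the EF queue is always served
    before the best-effort queue, so EF packets see minimal queuing delay
    as long as their service rate exceeds their arrival rate (mu > lambda)."""

    def __init__(self):
        self.ef = deque()   # high-priority queue for EF-marked packets
        self.be = deque()   # lower-priority best-effort queue

    def enqueue(self, packet, is_ef=False):
        (self.ef if is_ef else self.be).append(packet)

    def dequeue(self):
        # Strict priority: best effort is served only when EF is empty.
        if self.ef:
            return self.ef.popleft()
        if self.be:
            return self.be.popleft()
        return None
```

With strict priority, EF traffic must be policed at the edge (per the SLA); otherwise it can starve the best-effort queue.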
2.2.4.2 Assured Forwarding (AF) PHB
The AF PHB is defined by RFC 2597. Its purpose is to deliver packets reliably; delay and jitter are therefore considered less important than packet loss. The AF PHB is suitable for non-real-time services such as applications using TCP. It first defines four classes: AF1, AF2, AF3 and AF4. Within each AF class, packets are then classified into three subclasses with three distinct drop-precedence levels.
Table 2.8 shows the four AF classes, the 12 AF subclasses and the DSCP values for the 12 subclasses as defined by RFC 2597. RFC 2597 also allows more than three separate drop-precedence levels to be added for internal use; however, such levels only have internal significance.
| PHB class | PHB subclass | Drop precedence | DSCP |
|-----------|--------------|-----------------|--------|
| AF4 | AF41 | Low | 100010 |
| AF4 | AF42 | Medium | 100100 |
| AF4 | AF43 | High | 100110 |
| AF3 | AF31 | Low | 011010 |
| AF3 | AF32 | Medium | 011100 |
| AF3 | AF33 | High | 011110 |
| AF2 | AF21 | Low | 010010 |
| AF2 | AF22 | Medium | 010100 |
| AF2 | AF23 | High | 010110 |
| AF1 | AF11 | Low | 001010 |
| AF1 | AF12 | Medium | 001100 |
| AF1 | AF13 | High | 001110 |

Table 2.8 AF DSCPs
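The 12 AF code points in Table 2.8 follow a fixed bit pattern: three class bits, then two drop-precedence bits, then a trailing zero, i.e. DSCP = 8·class + 2·precedence. A sketch:

```python
def af_dscp(af_class, drop_precedence):
    """DSCP for AFxy per RFC 2597: DSCP = 8*x + 2*y, where x is the
    AF class (1-4) and y the drop precedence (1-3)."""
    if not (1 <= af_class <= 4 and 1 <= drop_precedence <= 3):
        raise ValueError("AF class must be 1-4, drop precedence 1-3")
    return (af_class << 3) | (drop_precedence << 1)
```

For example, af_dscp(4, 1) gives 0b100010, matching the AF41 row of Table 2.8.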
The AF PHB ensures that packets are forwarded with a high probability of delivery to the destination within the bounds of the rate agreed upon in an SLA. If AF traffic at an ingress port exceeds the agreed rate, it is considered non-compliant or "out of profile", and the excess packets will not be delivered to the destination with the same probability as packets within the defined traffic, the "in profile" packets. When there is network congestion, out-of-profile packets are dropped before in-profile packets.
When service levels are defined using AF classes, different levels of service quality between AF classes can be realized by allocating different amounts of bandwidth and buffer space to the four classes. Unlike EF, most AF traffic is non-real-time traffic using TCP, and the RED queue management strategy is an AQM (Active Queue Management) strategy suitable for use in AF PHBs. The four AF PHB classes can be implemented as four separate queues, with the output port bandwidth divided among them. Within each AF queue, packets are marked with three "colors" corresponding to the three drop-precedence levels.
Of the 32 Pool 1 DSCPs, 21 have been standardized: one for the EF PHB, 12 for the AF PHBs (Table 2.8) and 8 for the Class Selector code points. Eleven Pool 1 DSCPs remain available for future standards.
2.2.5 Example of Differentiated Services
We now look at an example of the Differentiated Services model and its mechanism of operation. The DiffServ architecture consists of two basic sets of functions:
Edge functions: include packet classification and traffic conditioning. At the inbound edge of the network, incoming packets are marked. In particular, the DS field in the packet header is set to a certain value. For example, in Figure 2.12, packets sent from H1 to H3 are marked at R1, while packets from H2 to H4 are marked at R2. The labels on the received packets identify the service class to which they belong. Different traffic classes receive different services in the core network. The RFC definition uses the term behavior aggregate rather than the term traffic class. After being marked, a packet can be forwarded immediately into the network, delayed for a period of time before being forwarded, or dropped. We will see that there are many factors that affect how a packet is marked, and whether it is forwarded immediately, delayed, or dropped.
Figure 2.12 DiffServ Example
Core functions: when a DS-marked packet arrives at a DiffServ-capable router, it is forwarded to the next router based on the per-hop behavior associated with its class. The per-hop behavior governs how the router's buffers and bandwidth are shared between competing classes. An important principle of the Differentiated Services architecture is that a router's per-hop behavior is based only on the packet's marking, that is, the class to which it belongs. Therefore, if packets sent from H1 to H3 in the figure receive the same marking as packets from H2 to H4, the network routers treat them exactly the same, regardless of whether they originated from H1 or H2. For example, R3 does not distinguish between packets from H1 and H2 when forwarding packets to R4. The Differentiated Services architecture thus avoids maintaining router state for separate source-destination pairs, which is important for network scalability.
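The statelessness of the core can be made concrete: a core router's forwarding decision is a pure function of the DSCP, not of the packet's source or destination. A sketch (the PHB descriptions in the table are illustrative; 46 and 34 are the standard EF and AF41 code points):

```python
# Illustrative PHB table: the forwarding treatment is looked up from the
# DSCP alone; no per-flow or per source-destination state is kept.
PHB_TABLE = {
    46: "EF: strict-priority queue",              # EF DSCP = 101110
    34: "AF41: AF queue 4, low drop precedence",  # AF41 DSCP = 100010
}

def forward(dscp):
    """Return the treatment for a packet based only on its marking."""
    return PHB_TABLE.get(dscp, "best effort")
```

By construction, packets from H1 and from H2 carrying the same DSCP receive identical treatment.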
Chapter Conclusion
Chapter 2 has presented and clarified two main models for deploying quality of service in IP networks. While the traditional best-effort model has many disadvantages, later models such as IntServ and DiffServ have partly solved the problems that best effort could not. IntServ guarantees quality of service for each separate flow; it is built similarly to the circuit-switching model, using the RSVP resource reservation protocol. IntServ is suitable for services that require fixed, unshared bandwidth, such as VoIP and multicast TV services. However, IntServ has disadvantages: it uses a lot of network resources, scales poorly and lacks flexibility. DiffServ was born with the idea of solving the disadvantages of the IntServ model.
DiffServ ensures quality through per-hop behavior based on the priority marked on packets. The policy for different types of traffic is decided by the administrator and can be changed to match reality, so it is very flexible. DiffServ makes better use of network resources, avoiding idle bandwidth and processing capacity on routers. In addition, the DiffServ model can be deployed across many independent domains, so the network becomes easy to expand.
Chapter 3: METHODS TO ENSURE QoS FOR MULTIMEDIA COMMUNICATIONS
In packet-switched networks, different packet flows often have to share the transmission medium all the way to the destination station. To ensure the fair and efficient allocation of bandwidth to flows, appropriate serving mechanisms are required at network nodes, especially at gateways or routers, where many different data flows often pass through. The scheduler is responsible for serving packets of the selected flow and deciding which packet will be served next. Here, a flow is understood as a set of packets belonging to the same priority class, or originating from the same source, or having the same source and destination addresses, etc.
In the normal state, when there is no congestion, packets are sent as soon as they arrive. Under congestion, if QoS assurance methods are not applied, prolonged congestion can cause packet drops, affecting service quality. In some cases congestion is prolonged and widespread in the network, which can freeze the network or drop many packets, seriously affecting service quality.
Therefore, in this chapter, in sections 3.2 and 3.3, we introduce some typical network traffic load monitoring techniques to predict and prevent congestion before it occurs through the measure of dropping (removing) packets early when there are signs of impending congestion.
3.1. DropTail method
DropTail is a simple, traditional queue management method based on the FIFO mechanism. All incoming packets are placed in the queue; when the queue is full, subsequent packets are dropped.
Due to its simplicity and ease of implementation, DropTail has been used for many years on Internet router systems. However, this algorithm has the following disadvantages:
− Cannot avoid the "lock-out" phenomenon: this occurs when one or several traffic flows monopolize the queue, so that packets of other connections cannot pass through the router. This greatly affects reliable transport protocols such as TCP: under its congestion-avoidance algorithm, a locked-out TCP connection will reduce its window size and its transmission rate exponentially.
− Can cause global synchronization: this is the result of a severe lock-out. When the queues of some neighboring routers are monopolized by a few connections, a series of other TCP connections cannot pass through and simultaneously reduce their transmission rates. After the monopolizing connections pause and the queues clear, it takes a considerable amount of time for the TCP connections to return to their original speed.
− Full-queue phenomenon: Internet traffic is bursty, so packets often arrive at the router in clusters rather than one by one. The DropTail mechanism therefore lets the queue stay full for long periods, leading to large average packet delays. With DropTail, the only remedy is to enlarge the router's buffer, which is expensive and ineffective.
− No QoS guarantee: with the DropTail mechanism there is no way to give important packets priority through the router when all are queued. Meanwhile, for multimedia communication, a stable connection and rate are extremely important, and the DropTail algorithm cannot provide them.
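The drop-tail behavior itself is easy to state precisely; a minimal sketch (the class name is ours):

```python
from collections import deque

class DropTailQueue:
    """Minimal FIFO queue with tail drop: arrivals beyond the buffer
    limit are simply discarded."""

    def __init__(self, limit):
        self.limit = limit
        self.q = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.q) >= self.limit:
            self.dropped += 1   # buffer full: the arriving packet is lost
            return False
        self.q.append(packet)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Note that the drop decision depends only on instantaneous queue occupancy, which is exactly why DropTail cannot distinguish flows or anticipate congestion.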
The problem of choosing router buffer sizes is to "absorb" short traffic bursts without introducing too much queuing delay, which is necessary for bursty data transmission. The queue size determines the size of the packet bursts (traffic spikes) that we want to be able to accept without drops at the routers.
In IP networks, packet dropping is an important mechanism for indirectly reporting congestion to the end stations. A solution that keeps router queues from filling up while reducing the packet drop rate is called active queue management (AQM).
3.2. Random early detection method – RED
3.2.1 Overview
RED (Random Early Detection of congestion; Random Early Drop) is one of the first AQM algorithms proposed in 1993 by Sally Floyd and Van Jacobson, two scientists at the Lawrence Berkeley Laboratory of the University of California, USA. Due to its outstanding advantages compared to previous queue management algorithms, RED has been widely installed and deployed on the Internet.
The most fundamental point of their work is that the most effective place to detect congestion and react to it is at the gateway or router.
Source entities (senders) can also do this by estimating end-to-end delay, throughput variability, or the rate of packet retransmissions due to drops. However, the sender's and receiver's view of a particular connection cannot tell which gateways in the network are congested, and cannot distinguish propagation delay from queuing delay. Only the gateway has a true view of the state of the queue, of the link share of the connections passing through it at any given time, and of the quality-of-service requirements of the traffic flows. The RED gateway monitors the average queue length to detect early signs of impending congestion (the average queue length exceeding a predetermined threshold) and reacts in one of two ways:
− Drop incoming packets with a certain probability, to indirectly inform the source of congestion; the source then needs to reduce its transmission rate so that the queue does not fill up, preserving the ability to absorb incoming traffic spikes.
− Mark "congestion" with a certain probability in the ECN field of the TCP packet header to notify the source (the receiving entity copies this bit into the acknowledgement packet).
Figure 3.1 RED algorithm
The main goal of RED is to avoid congestion by keeping the average queue size within a sufficiently small and stable region, which also keeps queuing delay sufficiently small and stable. Achieving this goal also helps avoid global synchronization, avoids bias against bursty traffic flows (i.e. flows with low average throughput but high variability), and maintains an upper bound on the average queue size even in the absence of cooperation from transport-layer protocols.
To achieve the above goals, RED gateways must do the following:
− The first is to detect congestion early and react appropriately to keep the average queue size small enough to keep the network operating in the low latency, high throughput region, while still allowing the queue size to fluctuate within a certain range to absorb short-term fluctuations. As discussed above, the gateway is the most appropriate place to detect congestion and is also the most appropriate place to decide which specific connection to report congestion to.
− The second is to notify the source of congestion, which is done by marking packets and thereby telling the source to reduce its traffic. Normally the RED gateway randomly drops packets. However, if congestion is detected before the queue is full, dropping should be combined with packet marking to signal congestion. The RED gateway thus has two options, drop or mark, where marking sets the ECN field of the packet with a certain probability to signal the source to reduce the traffic entering the network.
− An important goal that RED gateways need to achieve is to avoid global synchronization and to avoid bias against bursty traffic flows. Global synchronization occurs when all connections simultaneously reduce their transmission window size, leading to a severe simultaneous drop in throughput. On the other hand, Drop Tail or Random Drop strategies are very sensitive to bursty flows: the gateway queue often overflows when packets from these flows arrive. To avoid these two phenomena, gateways can use special algorithms to detect congestion and decide which connections will be notified of it. The RED gateway randomly selects incoming packets to mark; with this method, the probability of marking a packet from a particular connection is proportional to that connection's share of the bandwidth at the gateway.
− Another goal is to control the average queue size even without cooperation from the source entities. This can be done by dropping packets when the average size exceeds an upper threshold (instead of marking them). This approach is necessary when most connections have transmission times less than the round-trip time, or when the source entities cannot reduce traffic in response to marking or dropping (such as UDP flows).
3.2.2 Algorithm
This section describes the algorithm for RED gateways. RED gateways calculate the average queue size using a low-pass filter. This average queue size is compared with two thresholds: minth and maxth. When the average queue size is less than the lower threshold, no incoming packets are marked or dropped; when the average queue size is greater than the upper threshold, all incoming packets are dropped. When the average queue size is between minth and maxth, each incoming packet is marked or dropped with a probability pa, where pa is a function of the average queue size avg; the probability of marking or dropping a packet for a particular connection is proportional to the bandwidth share of that connection at the gateway. The general algorithm for a RED gateway is described as follows: [5]
For each packet arrival:
    calculate the average queue size avg
    if minth ≤ avg < maxth:
        calculate the probability pa
        with probability pa, mark (or drop) the arriving packet
    else if avg ≥ maxth:
        mark (or drop) the arriving packet
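The pseudocode can be fleshed out into a runnable sketch. The parameter defaults below are illustrative rather than tuned, and this sketch drops packets rather than ECN-marking them; a real gateway may do either. The count variable makes drops more evenly spaced, as in the original RED paper:

```python
import random

class REDQueue:
    """Sketch of the RED gateway algorithm (Floyd & Jacobson, 1993)."""

    def __init__(self, minth=5, maxth=15, maxp=0.1, wq=0.002, limit=50):
        self.minth, self.maxth = minth, maxth
        self.maxp, self.wq, self.limit = maxp, wq, limit
        self.q = []
        self.avg = 0.0    # low-pass filtered (EWMA) queue size
        self.count = 0    # packets accepted since the last mark/drop

    def on_arrival(self, packet):
        # Low-pass filter: avg <- (1 - wq) * avg + wq * q
        self.avg = (1 - self.wq) * self.avg + self.wq * len(self.q)
        if self.avg >= self.maxth:
            self.count = 0
            return False                 # avg >= maxth: drop every arrival
        if self.avg >= self.minth:
            self.count += 1
            pb = self.maxp * (self.avg - self.minth) / (self.maxth - self.minth)
            pa = pb / max(1e-12, 1.0 - self.count * pb)
            if random.random() < pa:
                self.count = 0
                return False             # early probabilistic drop (or mark)
        else:
            self.count = 0
        if len(self.q) >= self.limit:
            return False                 # physical buffer overflow
        self.q.append(packet)
        return True

    def on_departure(self):
        return self.q.pop(0) if self.q else None
```

Because the decision is driven by the filtered average rather than the instantaneous queue length, short bursts pass through while sustained congestion triggers early drops.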


