To become marketable products that are useful to society, crude oil must pass through processing steps that add value through separation and conversion. The entrance gate of crude oil into a refinery is the crude oil distillation unit, which separates the crude into process streams that, after adequate treatment, are commercialized as derivatives such as transportation fuels, petrochemical intermediates, etc. Figure 1 presents a scheme of a typical atmospheric crude oil distillation unit.
Figure 1 – Typical Atmospheric Crude Oil Distillation Unit
The bottom stream of the atmospheric column (atmospheric residue) still contains recoverable products that can be converted into high-added-value derivatives; however, under the process conditions of the atmospheric unit, additional heating leads to thermal cracking and coke deposition.
To minimize this effect, the atmospheric residue is pumped to the vacuum distillation column, where the pressure reduction lowers the boiling point of the heavy fractions, allowing their recovery while minimizing thermal cracking. Figure 2 shows a typical process arrangement for a vacuum crude oil distillation unit dedicated to producing intermediate streams for transportation fuels.
Figure 2 – Vacuum Crude Oil Distillation to Transportation Fuels Production
The heavy and light gasoil streams are normally directed to conversion units such as hydrocracking or fluid catalytic cracking (FCC), according to the adopted refining scheme. The fractionating quality achieved in the crude oil vacuum distillation column has a direct impact on the reliability and operational lifecycle of the conversion units, since it is in this step that the metals content and residual carbon (CCR) concentration in the feedstock to these processes are controlled. High values of these parameters lead to quick catalyst deactivation, raising operational costs and reducing profitability.
The vacuum generated in the column can be humid, semi-humid, or dry. In humid vacuum operation, steam is injected into both the fired heater and the column to reduce the partial pressure of the hydrocarbons and improve recovery, while in semi-humid operation the steam is injected only into the fired heater, minimizing residence time and reducing coke deposition. Dry vacuum operation involves no steam injection; in this case it is possible to achieve pressures between 8 and 20 mmHg, while in humid operation the column runs at pressures between 40 and 80 mmHg. However, comparable yields are possible through the injection of stripping steam. Figure 3 presents a process arrangement for a typical vacuum generation system in a vacuum crude oil distillation unit.
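The effect of the pressure reduction on boiling point can be roughly illustrated with the Clausius-Clapeyron relation. The sketch below is illustrative only: the heat of vaporization and the 550 °C atmospheric boiling point assumed for a heavy cut are hypothetical round numbers, not data from the units described above.

```python
import math

def boiling_point_at_pressure(tb_atm_k, p_mmhg, dh_vap=80e3):
    """Estimate the boiling point (K) at a reduced pressure via the
    Clausius-Clapeyron relation. dh_vap (J/mol) is an assumed, roughly
    constant heat of vaporization for a heavy petroleum cut."""
    R = 8.314  # J/(mol K)
    # ln(P2/P1) = -(dH/R) * (1/T2 - 1/T1), with P1 = 760 mmHg at tb_atm_k
    inv_t2 = 1.0 / tb_atm_k - (R / dh_vap) * math.log(p_mmhg / 760.0)
    return 1.0 / inv_t2

# A hypothetical heavy gasoil cut boiling at ~550 C (823 K) at atmospheric pressure:
tb_10 = boiling_point_at_pressure(823.0, 10.0)   # dry-vacuum pressure range
tb_60 = boiling_point_at_pressure(823.0, 60.0)   # humid-vacuum pressure range
print(f"~{tb_10 - 273.15:.0f} C at 10 mmHg, ~{tb_60 - 273.15:.0f} C at 60 mmHg")
```

The two-hundred-degree-plus drop in effective boiling point is what allows the heavy fractions to be recovered below the temperatures at which thermal cracking becomes severe.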
Figure 3 – Process Arrangement for a Typical Vacuum Generation System for a Vacuum Crude Oil Distillation
Some refiners include additional side draws in the vacuum distillation column. When the objective is to maximize diesel production, it is possible to draw a stream lighter than light vacuum gasoil that can be added to the diesel pool directly or after hydrotreating, according to the sulfur content of the processed crude oil. When the crude oil has a high metals content, it is possible to draw a fraction heavier than the heavy gasoil, called residual gasoil or slop cut. This additional cut concentrates the metals and reduces the residual carbon in the heavy gasoil, minimizing the deactivation of the conversion-process catalysts as mentioned above. Normally, the residual gasoil is applied as a diluent to produce asphalt or fuel oil.
When the refinery is focused on producing lubricants, the vacuum column is designed for better fractionating quality, while in a column dedicated to producing fuels the internals are designed mainly to promote heat exchange between the streams. The better fractionation in the lubricants case is required in order to produce the lubricant cuts, as presented in Figure 4.
Figure 4 – Process Arrangement for a Vacuum Column to Produce Lubricants
Vacuum residue is normally directed to asphalt production or, in refineries with higher conversion capacity, to bottom-of-the-barrel conversion units such as delayed coking and solvent deasphalting, aiming to improve the yield of high-added-value derivatives.
According to the refining scheme, the installation of a vacuum distillation unit can be dispensed with. Refiners that rely on residue fluid catalytic cracking (RFCC) units can send the atmospheric residue directly to the feed of these units; however, it is necessary to control the contaminants content (metals, sulfur, nitrogen, etc.) and residual carbon (CCR) to protect the catalyst. This restricts the crude oil slate that can be processed, reducing the refiner's operational flexibility. On the other hand, in refineries that process extra-heavy crudes, the crude oil distillation unit is normally restricted to the vacuum unit, since the yields of the atmospheric column would be very low and the coking risk very high.
The processing of residual streams and residue upgrading play a key role in the economic performance of the downstream industry, and this role is set to grow after 2020 with the start of IMO 2020, which reduces the allowed sulfur content of bunker (marine fuel oil) from the current 3.5 wt% to 0.5 wt%. This regulation should restrict the use of high-contaminant streams as diluents in fuel oil production, as is done today, and would force refiners to apply high-added-value streams (diesel, for example) as diluent, pressuring their profitability. In this scenario, refineries with higher complexity should have a competitive advantage over their competitors, which may lead refiners to make capital investments to improve their bottom-of-the-barrel conversion capacity.
Advanced control strategies are often organized hierarchically, with multivariable controllers providing the set-points to low-level controllers, which are typically of the proportional-integral-derivative (PID) type. Thus, it has to be recognized that the overall process performance relies in any case on the performance of the PID controllers. In fact, despite the presence of many effective automatic tuning methodologies, based on identification methods suitable for industrial application, in many practical cases PID controllers are poorly tuned, because of a lack of time or operator skill, or because of changes in operating conditions. Obviously, especially in large plants with hundreds of control loops, it is important to have techniques able to automatically assess the performance of a PID controller and, in case it is not satisfactory, to retune the controller in order to optimize the performance. Some features are particularly appreciated in this context:
Employment of routine operating data, so that no special (possibly time- and energy-consuming) experiments are needed
Low computational effort, so that the technique can be applied to hundreds of control loops without significantly affecting the controllers' CPU/memory (no complex identification methods requiring large array/matrix operations)
Capability to address both setpoint following (r(t) in Figure 1) and load disturbance rejection (d(t) in Figure 1)
Robustness to measurement noise, which is typical in industrial applications
FIGURE 1. SINGLE LOOP CONTROL
Simplified first-order or integrating plus dead-time (FOPDT, IPDT) models are representative of 90% of the dynamics in the process industry; furthermore, they can reasonably approximate higher-order dynamics thanks to the well-known "half rule", according to which the largest neglected (denominator) time constant is distributed evenly between the effective dead time and the smallest retained time constant.
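As a quick illustration of the half rule, the snippet below collapses a hypothetical second-order-plus-dead-time model to FOPDT; the numbers are invented for the example.

```python
def half_rule_fopdt(gain, tau1, tau2, theta0=0.0):
    """Skogestad's half rule for reducing K*exp(-theta0*s)/((tau1*s+1)(tau2*s+1)),
    with tau1 >= tau2, to an FOPDT model: half of the neglected time constant
    tau2 is added to the effective dead time, half to the retained tau1."""
    tau = tau1 + tau2 / 2.0      # retained (largest) time constant grows
    theta = theta0 + tau2 / 2.0  # effective dead time grows
    return gain, tau, theta

# A hypothetical process with tau1 = 10 s, tau2 = 4 s and 2 s of dead time
# reduces to tau = 12 s, theta = 4 s:
k, tau, theta = half_rule_fopdt(1.0, 10.0, 4.0, 2.0)
```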
The model parameters (gain, lag, dead time) can be estimated in many ways, typically already implemented in function blocks available in DCS libraries. Alternatively, when it is worth saving CPU memory and computational load, a single function block can be created that estimates the parameters of any PID loop whose tag name is supplied as an input. An example of such a function block is reported in the next section, based on the theory presented in the reference paper.
For such a simplified FOPDT model, the integral of the absolute error (or deviation, i.e. the difference between the process variable and the reference signal) at the end of the transient following a setpoint step change can be analytically obtained as IAE = As(θ + λ), where As is the setpoint step amplitude, θ the process dead time, and λ the desired time constant of the closed-loop response, which cannot reasonably be expected to be lower than θ. Therefore, the performance of the setpoint following task can be evaluated by the index SFPI indicated in Table 1.
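Assuming the index is defined as the ratio of the minimum achievable IAE, As(θ + λ), to the IAE actually measured over the transient (so that values near 1 indicate good tuning), a minimal sketch is:

```python
def sfpi(iae_measured, step_amplitude, dead_time, lam=None):
    """Set-point following performance index (SFPI), sketched under the
    assumption SFPI = IAE_target / IAE_measured, with
    IAE_target = As * (theta + lam). lam defaults to theta, the lowest
    closed-loop time constant it is reasonable to expect."""
    lam = dead_time if lam is None else lam
    iae_target = step_amplitude * (dead_time + lam)
    return iae_target / iae_measured

# e.g. a unit set-point step with theta = 50 s and a measured IAE of 200:
index = sfpi(200.0, 1.0, 50.0)   # target IAE is 100, so the index is 0.5
```

A low value such as 0.5 would flag the loop as a candidate for retuning.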
With regard to load disturbance rejection, the target IAE has to be set as the one achievable through a tuning rule specifically designed for this task. In the referenced papers it is shown how such a choice leads to the performance index LRPI indicated in Table 2.
The proposed algorithm has been applied to the temperature control loop shown in Figure 2 as TIC3206. The plant is dedicated to the production of energy from renewable sources, in particular by using palm oil as a fuel. The control task consists of keeping the palm oil pipes at the required (warm) temperature to avoid solidification of the oil, which would cause serious damage to the plant. During routine operations, the system has to hold the steady-state value, but during the start-up phase the controller must follow a set-point step signal effectively. One function block has been developed to perform the process parameter estimation for the PID tag name passed to it as an input variable. The core of the computation is expressed by the following code:
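A minimal Python sketch of that computation is given below. The class name and interface are illustrative, not the actual DCS function block code; it assumes a step applied at t = 0 and that it is called every scan until the process variable settles.

```python
class FOPDTEstimator:
    """Incremental FOPDT parameter estimator for a step response.

    Only running sums are kept (no arrays), which makes the computation
    cheap and inherently robust to measurement noise. Estimates the
    steady-state gain and the average residence time (dead time + lag)
    from the integral of the response."""

    def __init__(self, ts, pv0, mv0):
        self.ts = ts          # scan time (s)
        self.pv0 = pv0        # process variable before the step
        self.mv0 = mv0        # manipulated variable before the step
        self.t = 0.0          # elapsed time since the step
        self.int_pv = 0.0     # running integral of the PV
        self.pv = pv0
        self.mv = mv0

    def update(self, pv, mv):
        self.t += self.ts
        self.int_pv += pv * self.ts   # incremental integral, no storage
        self.pv, self.mv = pv, mv

    def results(self):
        dpv = self.pv - self.pv0      # PV assumed settled at its final value
        dmv = self.mv - self.mv0
        gain = dpv / dmv
        # integral of (pv_final - pv(t)) over the transient, from running sums;
        # dividing by the PV change gives dead time + lag
        area = self.pv * self.t - self.int_pv
        return gain, area / dpv
```

For a noise-free first-order response with gain 2 and a 10 s lag, the estimator recovers approximately (2.0, 10.0), and because it only accumulates integrals, zero-mean noise largely averages out.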
It is worth underlining that the model parameters have been obtained making use of integral variables, which can be incrementally computed (no arrays in the memory) and are inherently robust to measurement noise.
The controller was initially tuned with a proportional band PB = 70% (note that the proportional band equals 100/Kp) and an integral time constant Ti = 70 s (Td = 0). After the application of the step signal to the set-point, the process parameters were determined as delay = 94 s, gain = 0.285, and 184.3 s as the sum of lags and delay. The corresponding value of SFPI was determined as 0.545, indicating the need for retuning. After retuning, the new PID controller parameters were PB = 82.78%, Ti = 64.82 s and Td = 24.45 s, with a corresponding value of SFPI = 0.973. The set-point step responses before and after the retuning procedure are shown in Figure 3, where a clear improvement in performance appears (note the different time ranges in the two plots). In particular, the settling time has been considerably reduced, which is obviously appreciated in the start-up phase.
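The proportional-band convention mentioned above is easy to trip over; as a one-line check of the gains implied by the reported values:

```python
def pb_to_kp(pb_percent):
    """Convert proportional band (%) to proportional gain: Kp = 100 / PB."""
    return 100.0 / pb_percent

kp_before = pb_to_kp(70.0)    # about 1.43
kp_after = pb_to_kp(82.78)    # about 1.21, a slightly less aggressive P action
```

So the retuning slightly reduced the proportional gain while adding derivative action, rather than simply making the loop more aggressive.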
FIGURE 2. OVERVIEW OF PART OF THE RENEWABLE ENERGY PLANT USED FOR EXPERIMENTAL RESULTS.
FIGURE 3. SET-POINT STEP RESPONSES IN THE TEMPERATURE CONTROL LOOP. LEFT: INITIAL; RIGHT: RETUNED.
Methodologies for the deterministic performance assessment and retuning of PID controllers have been reviewed and presented in a unified way in this paper. The basic idea, developed for different contexts, is to exploit the final value theorem to estimate the process parameters based on the integrals of appropriate signals resulting from a set-point or load disturbance step response. This makes the technique suitable for implementation in industry, as it uses routine operating data, is inherently robust to measurement noise, and gives a result that is almost independent of the tuning of the initial controller (on the contrary, standard least squares techniques assume an input signal that significantly excites the dynamics of the system to be estimated). The methodologies analyzed can be implemented with standard Distributed Control System software and can also be extended to more complex control techniques, such as cascade control, dead time compensators and feedforward control.
Despite the availability of enhanced multivariable control algorithms, whose computational complexity typically requires implementation at an upper level, PID control is still the primary component of any basic loop, and through clever PID-based architectures many "DCS-enabled" solutions can be built and standardized in the industry. Therefore PID control will still be used for a long time; knowledge of it will remain key for any control/process engineer, and any contribution to improving the effectiveness of PID control will always be welcome, both in research and in industry.
Since 1980, Simulation Solutions and its predecessor, Atlantic Simulation, have pioneered the field of microcomputer process training simulators by using low-cost, readily available computer hardware. Since then, the power and speed of these low-cost machines have substantially increased, allowing the company to exploit the tremendous capabilities of today's technology.
From the beginning, their focus has centered on cost-effective simulator packages utilizing a wide range of available Standard Process Models and Emulated Distributed Control System (DCS) operator training stations. Low-cost hardware solutions coupled with readily available Standard Process Models permit clients to implement simulator technology in the face of budget constraints. Today, advanced programming methods allow Simulation Solutions to stress model fidelity and control system realism that others find difficult to duplicate.
In 2001, Simulation Solutions created a major breakthrough in simulator training technology. Our new Hands On Training System contains a broad range of high fidelity process models and realistic DCS system emulations which have been integrated into a network based, fully automated training system that includes detailed training exercises, comprehensive on-line help, self and graded evaluations, and the recording of test scores and results.
Clients using this newly developed technology are able to deliver hands-on training on shift and around the clock. The automated system frees up simulator instructors for more coaching and advising, and/or allows for completely self-paced self-study and practice. Immediately after taking a classroom or network-based course on theory, operators can see this theory put into action through dynamic and realistic simulator training exercises.
Operator Training Simulators
Simulation Solutions offers a wide variety of Process Simulators which include both a DCS component and a Virtual Reality Outside Operator. This Outside Operator is fully integrated with the DCS side of the Simulator. Trainees will explore the Virtual Reality Outside Operator and be able to operate all pieces of equipment that are represented on the DCS. Actions performed in the Outside Operator (Opening of Valves, Pumps, Controllers) are reflected in real time on the DCS schematics, and vice versa.
Some of their most popular modules are listed below:
Often utilized as a “warm-up” for our Distillation Simulator, a Flash Drum separates a binary feed made up of butane and hexane. The Feed is under flow control and the temperature is controlled with a hot oil exchanger. The temperature controller and overhead pressure controller determine the compositions of the overhead and bottoms streams. A mass balance of the two components is displayed in mass per time units to allow trainees to develop a clear understanding of mass balance. A Virtual Reality Outside Operator accompanies the program.
Pump & Valve
Water enters a series of tanks in a unit that features feed, level and pressure controllers. Specifically, the pressure on each tank is controlled using a split-range controller utilizing a nitrogen blanket, and vent valve. The first tank in the system features a level controller managing feed entering the tank, while the second tank in the system has a level controller managing feed out of the tank. A series of pumps can be swung during exercises. A Virtual Reality Outside Operator accompanies the program.
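The split-range arrangement described above can be sketched in a few lines. The 0-50%/50-100% split and the valve directions below are an assumed, common convention (controller output low: nitrogen make-up opens to raise pressure; output high: vent opens to relieve it), not necessarily the exact configuration in the module.

```python
def split_range(op):
    """Map a single pressure controller output (0-100 %) to two valves:
    0-50 %   -> nitrogen make-up valve (fully open at 0, closed at 50)
    50-100 % -> vent valve (closed at 50, fully open at 100).
    Returns (n2_valve_pct, vent_valve_pct)."""
    op = min(max(op, 0.0), 100.0)           # clamp controller output
    n2_valve = max(0.0, 50.0 - op) * 2.0    # rescale 0-50 % to 100-0 %
    vent_valve = max(0.0, op - 50.0) * 2.0  # rescale 50-100 % to 0-100 %
    return n2_valve, vent_valve
```

At mid-range output both valves are closed and the nitrogen blanket simply floats, which is the desirable dead band that keeps the two valves from fighting each other.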
A company in the oil industry found that while startup, shutdown, and changeover periods account for less than 5% of operations staff time, 40% of plant incidents occur during these periods (NPRA 2009 National Safety Conference). In fact, every second incident or accident in the process industry is related to communication errors that occurred during shift handovers.
When equipment needs to change operating mode, risk rises; likewise, when operations staff change over to a new shift, the potential knowledge gaps that result from the change give rise to risk as well. Ensuring that shift handovers are conducted in a clear and concise manner is one of the most important components of mitigating risk during periods of change.
Poorly written notes and/or technical misunderstandings are the root cause of these major issues. In this series we will look at some of the steps that we, as an industry, can take to improve on the 5%-to-40% figure. I believe that substantial improvements can be made by following these simple steps. Isn't this something that we can work towards together?
Effective Knowledge Transfer
Shift handover is effectively the transfer of knowledge from the outgoing staff member to an incoming staff member. It is typically thought of as a unidirectional process in which the outgoing operator decides which information is important to transfer so that the incoming staff can effectively operate the facility.
Whilst it may seem reasonable for shift handover to be conducted unidirectionally, research from Ronald Lardner (The Keil Centre, 1999) shows that when communications are conducted in a bi-directional or repetitive manner (questioning, validating and repeating each handover item), the confidence and accuracy of the transferred knowledge are substantially improved.
The reason bi-directional communication improves shift handover is that it creates alignment in the "mental model" of both operators. By aligning their mental understanding, the gaps in overall understanding are closed, and with a common understanding there is less room for information to fall through the cracks.
I would like to start this series by focusing on Mental Modelling; what it is and its impact on shift handover. This approach is meant to explore ways that we, as administrators, can leverage the knowledge of our mental model (as well as our colleagues’) for plant safety and overall improvement.
In subsequent articles we will look into the ways of properly structuring shift handover reports, and conclude the series by looking at the best practices for shift handover from the perspective of regulatory inspectors.
What are Mental Models?
Mental Models are defined as the organized models or structures of reality that enable us to understand, reason and/or interpret events. Philip N. Johnson describes Mental Models from an industrial perspective as:
“Organized knowledge structures that operators develop to understand and explain their experiences representing a specific task or knowledge domain.” ( Johnson 2001)
In other words, a typical refinery operator will have his/her personal mental model of their respective process unit, based on unique personal experiences spanning several years. A person's mental model is constantly evolving: each and every time an operator speaks with colleagues or takes action at the facility, it enhances, adjusts and updates the operator's mental model.
“(Operators) do not blindly follow a sequence of instructions, but rather visualize the current process flow and initiate systematic actions in close consultation with field operators, while monitoring the reaction of the process towards achieving the desired state.”
So, operations staff do not just follow the instructions blindly; they read what to do, consider the impact according to their personal mental model and the mental models of the field operators, then take the appropriate action in consultation with their peers.
The Yin and Laberge study confirmed that an operator's mental model largely comes from the time they have spent working as field operators. Specifically, they identified the following key areas of field-based learning that formed the basis of mental modeling for most people:
Walking through the process to gain line-up and layout knowledge
Educational programs to learn how the process works internally
Where and why equipment is in a certain location
What happens in each component
Educating themselves on standard operating procedures
But of utmost and critical importance was their personal experience handling incidents and upsets: what happened, why it happened, how the process reacted, and how they were able to bring the process back to expected conditions.
We see this in other research comparing the performance of junior and senior operators as well. The results have shown that junior operators who constantly adjusted the control parameters achieved the lowest operational performance, whereas a senior operations group took few actions and achieved very high results.
This is expected to be the result of a highly developed and detailed mental model, which can clearly predict the result of any actions taken in the process.
Why are Mental Models imperfect?
It's simple: a standardized training course does not result in standardized knowledge. Each operator joins a company and facility with a personal background (e.g. working experience from other facilities). Even if two operators share experience from the same facility, their experiential learning is almost certainly different. It is the difference in experiencing various incidents or excursions that results in "mental models" that are vastly different.
When operators write shift handover reports, the reports are based on one assumption – one BIG assumption! The assumed fact is that all staff members have a shared thought process and common understanding, that is, in line with their personal “Mental Model.” Herein lies the problem – this assumption leads to miscommunication, lack of a common understanding, and potential incidents.
Mental models are a key component of the decision-making process. We have ZERO ability to achieve 100% standardization, so we are left in a rather ambiguous and challenging situation, aren't we?
How Does This Impact Shift Handover?
As mentioned in the introduction, we've seen that shift handover is one of the highest-risk periods in facility operations and that the outgoing operator decides for himself/herself what is important enough to pass to the incoming operator. But, critically, we know that what he/she decides is based on their personal mental model.
But what happens when the mental model of the incoming and outgoing operators do not align…?
Yin and Laberge (2010) rightly identified that even in the space of 12 hours, the facility can undergo significant change:
“As equipment conditions change during a shift, operators returning back to work after a 12 hour period of rest may find that the process units are now operating in a vastly different operating mode.”
Of course, all operators will advise the incoming shift at handover if they changed the operating mode. But will they explain that, when changing the operating mode, they followed an "alternative" startup procedure (a procedure they deemed more efficient based on their own mental model)?
If the outgoing operators choose not to share this information, it is most likely because it may be assumed, or "a given", in their minds, based on their personal mental model, and it is within the boundaries of the formal SOP. But the fact is that the startup procedure the outgoing shift followed may be seen as an alternative procedure by the incoming shift.
In other words, the outgoing shift will perceive this method as "standard" enough to be embedded in their mental model and not important enough to include in the shift handover, while in reality the incoming shift has been left in the dark. These small gaps in mutual understanding can lead to large disruptions down the line.
Bi-Directional Shift Handover
Whilst the risks of miscommunication are significant, the resolution is quite simple: clear, concise, person-to-person communication. But putting this into practice day in and day out is where the challenge lies.
An operator's role often alternates between periods of quiet and periods of intense activity, and in many cases operators work 12-hour shifts, resulting in mental fatigue by the time the shift is completed.
From the perspective of the outgoing shift, the challenge in overcoming the disparity in mental models is expected to reside in two key areas that require more clarity:
Timing of when key information occurs
Mental fatigue at handover / report creation time
Oftentimes, the information that is most important for a safe and efficient shift handover occurs during the periods of intense activity. As a result, the operator typically takes notes in short form and with minimal effort. The operator will sometimes deliberately exclude information because it is deeply embedded in his/her mental model; to the operator who wrote the note, the excluded information would be considered normal or a standard result of a previously noted action.
When handover reports are left unwritten until the end of shift, in many cases the key challenge is for the operator to then remember the fine details that occurred up to 6-12 hours prior.
Based on the above, it is critically important for operators to see the creation of shift handover reports as an ongoing process that begins at the start of their shift. The reports should be periodically and critically reread from the perspective of a junior operator, ensuring a final report that is crystal clear.
The second component for effective shift handover is human-to-human interaction: both incoming and outgoing operators should review the full shift handover report together as a team – interpersonal communication is the key.
In particular, the following process should be conducted for each item in the shift handover report:
Outgoing operator: Describe the entry
Incoming operator: Ask any and all pertinent questions
Incoming operator: Describe the results he would expect
Outgoing operator: Confirm or Correct expected results
By following these four steps for every entry in each shift handover, there are multiple checking stages. These stages help ensure that the knowledge has been mapped and recorded correctly and, in a sense, compare the mental models of the incoming and outgoing operators, validating logic and avoiding technical misunderstanding.
Whilst there are some shining examples of best practice, as an industry we still have large room for improvement when it comes to shift handover. From both the vendor and user perspectives, we must continue to find methods and technologies that enable operators to take notes during their shift, accurately, easily, clearly and concisely, whilst having minimal impact on the operator's task at hand.
Most importantly, we must strive to create corporate cultures that promote questioning. Ensuring that all staff are willing and able to question their colleagues on any matter, for the benefit and safety of the company, is vital, and will continue to be vital as the industry becomes more competitive. Particularly when it comes to shift handover, all staff must be willing and able to question even the most highly regarded and respected staff, to ensure a safe and stable environment.
Based on our knowledge and industry research, at Yokogawa we are working strongly across various industries in the area of shift handovers and shift handover reporting to support our customers in realizing industry best practices.
In the spirit of Co-Innovation we have been steadily building and enhancing our Real-Time Production Organizer – Logbook, Work Instruction and Shift Handover modules to realize this vision.
I would like to share a very good article, "How LNG is Liquefied: Intro, Adsorption Cooling, then Controls", written by John Lozinski.
Gas is delivered to the processing plant by truck, train or sometimes pipeline. The gas must be cleaned to remove all impurities and water that would interfere with the cryogenic processing of methane. Water is removed by adsorption so that ice will not form in the liquefaction process. Adsorption is the process in which gases, liquids or dissolved solids adhere to the surface of an adsorbent.
The next step is liquefaction. There are three types of liquefaction cycles: Mixed Refrigerants, Turbo expansion and Cascade.
LNG liquefaction is basically the same as a modern refrigerator, except that the required temperature is minus 161 degrees Celsius (minus 258 degrees Fahrenheit), the temperature at which methane liquefies. The temperature is reduced, in part, by the Joule-Thomson effect.
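The order of magnitude of Joule-Thomson cooling can be sketched with a constant, assumed JT coefficient. In reality the coefficient varies strongly with temperature and pressure, so the figure below is illustrative only:

```python
def jt_temperature_drop(p_in_bar, p_out_bar, mu_jt=0.5):
    """Rough Joule-Thomson temperature drop across a let-down valve,
    dT = mu_JT * dP, with an assumed average JT coefficient of ~0.5 K/bar
    for methane (a hypothetical round number; it varies with conditions)."""
    return mu_jt * (p_in_bar - p_out_bar)

# Letting gas down from 60 bar to 2 bar in a single pass:
drop = jt_temperature_drop(60.0, 2.0)
```

A single let-down gives only tens of kelvin of cooling, which is why the JT step is combined with compression and external refrigeration cycles to reach minus 161 degrees Celsius.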
Often an adsorbent is also used to remove mercury. Mercury, when present in gas processing facilities, can be a primary cause of corrosion, equipment failure and downstream catalyst deactivation. Mercury has a low vapor pressure and low solubility and is liquid at room temperature.
Each gas field has varying levels of mercury, measured in parts per billion. The brazed aluminum heat exchangers commonly found in liquefied natural gas and petrochemical plants are particularly susceptible to liquid-metal embrittlement caused by mercury, so varied processes, including carbon beds and additives, are used to remove most of it.
When mercury is present at very low concentrations in relatively large gas streams, powdered or pellet adsorbents can be used to remove it. The adsorbent can be injected into the gas stream and, after an appropriate residence time, filtered out in a dust collector.
Most systems are designed to remove trace amounts of mercury from the gas stream; over the decades, mercury has repeatedly been the cause of heat exchanger failures.
Amine absorbers are used for acid gas removal and to take out other impurities. Excess heavy components such as propane, butane and pentanes are also recovered and can be sold as products. At this point the cooling begins.
Cooling requires an enormous amount of energy, and a number of solutions exist. The methods used are called C3-MR, AP-X, Cascade, DMR and SMR; you can Google how each of these methods works in detail. All of them require large refrigeration compressors, and many solutions, including gas turbines and large diesels, are used to drive them. Designs and solutions vary, and with modernization come improved solutions.
One method of liquefaction uses heat exchangers between the cooling gases and refrigerants, similar to a refrigerator. Most of us have heard of automotive intercoolers for turbochargers; you might even know there are air-to-air and liquid-to-air intercoolers for car turbos.
In an LNG plant, liquefied refrigerants cool the methane gas that is to be liquefied. The refrigerant itself is liquefied by compressing gaseous materials such as nitrogen, methane, ethane and propane, which are used as coolants. Processes use heat exchanger cold boxes for condensing, and LNG sub-coolers using nitrogen coolant.
The cascade method uses propane, ethylene and methane as coolants, applied in progression, and the cycle continues so as to achieve the highest possible efficiency. DMR uses a mixed coolant in Shell's process, while SMR uses only one type of coolant.
Aspen HYSYS®, a leading process simulation software package for the oil and gas industry, is continuously upgraded to add new functionality for improved process design and safe, optimized plant operations, all in a unified engineering environment.
I have collected some tutorial videos for our learning purposes. You can check them out below:
1-Gas Processing – Amine Sweetening Process with Aspen hysys 7.3
2-Gas Processing – Glycol Dehydration Process with Aspen hysys 7.3
3-Gas Processing – Dew Point Control by: (1/2) Heavy Hydrocarbons Removal Process – Aspen hysys 7.3
4-Gas Processing – Dew Point Control by: (2/2) Joule Thomson Plant with Aspen hysys 7.3
Onshore Dukhan is a large oil and gas field extending over an area of approximately 80 km by 8 km, located about 80 km west of Doha. It produces crude oil, associated gas, condensate and non-associated gas.
The Dukhan field comprises three sectors from north to south: Khatiyah, Fahahil and Jaleha/Diyab. Oil and gas are separated in four main degassing stations: Khatiyah North, Khatiyah Main, Fahahil Main and Jaleha. The satellite stations are Fahahil North, Fahahil South and Khatiyah South. The Diyab satellite station at the south end of the field has no process facilities, and its total oil production is sent to the Jaleha station for processing. Stabilized crude oil is transported by pipeline to Mesaieed port, about 100 km east of Dukhan.
The Dukhan oil field has production facilities capable of producing up to 335,000 barrels per day (b/d); however, actual annual production is based on reservoir management requirements. The field has a total of 300 oil-producing wells, 182 water injection wells and 58 gas-producing and injector wells. According to the latest well status, the total number of wells in Dukhan is 605, including all production, injection, observation, closed-in and abandoned wells.
The first shipment of oil from Dukhan was transported through Mesaieed port terminal on December 31, 1949. Dukhan crude is of high quality, with an API gravity of 40 degrees and a sulfur content of 1.5%.
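For reference, degrees API relate to specific gravity at 60 °F by the standard definition, so the 40 °API figure quoted can be converted directly:

```python
def api_to_sg(api_degrees):
    """Specific gravity at 60 F from API gravity: SG = 141.5 / (131.5 + API)."""
    return 141.5 / (131.5 + api_degrees)

# 40 API Dukhan crude corresponds to a specific gravity of about 0.825,
# i.e. a light, high-quality crude (higher API means lighter oil).
sg = api_to_sg(40.0)
```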
Qatar Petroleum’s latest corporate video traces the corporation’s history and the State of Qatar’s rapid development over the years. Using a combination of animations, aerial shots, old footage, graphics and other special effects, the video tells an engaging story of how the sustainable.