Pinch Point Analysis

Pinch Point Analysis is a systematic process design methodology consisting of concepts and techniques that ensure optimal use of energy. The Pinch is characterized by the minimum temperature difference between hot and cold streams, and designates the location where heat recovery is most constrained.

The fundamental computational tool is the Problem Table algorithm. This tool allows identification of the Pinch, as well as of the targets for hot and cold utilities.

The net heat flow across the Pinch is zero. Consequently, the system can be split into two stand-alone subsystems, above and below the Pinch. Above the Pinch there is need only for hot utility, while below the Pinch only cold utility is necessary. For a given ΔTmin, the hot and cold utility consumptions identified in this way become the Minimum Energy Requirements (MER). No design can achieve MER if there is cross-pinch heat transfer.
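
A minimal sketch of the Problem Table cascade in Python follows, using hypothetical stream data (supply temperature, target temperature, CP) and an illustrative ΔTmin; it reproduces the targeting logic described above, not any particular plant.

```python
# Problem Table sketch: shift temperatures by DT_MIN/2, balance each
# temperature interval, cascade the surpluses, then restore feasibility
# with the minimum hot utility. Stream data are hypothetical.
DT_MIN = 10.0  # minimum approach temperature, K

# Each stream: (supply T in degC, target T in degC, CP in kW/K).
hot_streams = [(170.0, 60.0, 3.0), (150.0, 30.0, 1.5)]
cold_streams = [(20.0, 135.0, 2.0), (80.0, 140.0, 4.0)]

# Hot streams are shifted down and cold streams up by DT_MIN/2, so zero
# driving force in shifted temperatures equals DT_MIN in real ones.
shifted = [(ts - DT_MIN / 2, tt - DT_MIN / 2, cp, +1) for ts, tt, cp in hot_streams]
shifted += [(ts + DT_MIN / 2, tt + DT_MIN / 2, cp, -1) for ts, tt, cp in cold_streams]

# Temperature interval boundaries, hottest first.
bounds = sorted({t for ts, tt, _, _ in shifted for t in (ts, tt)}, reverse=True)

# Net heat surplus of each interval, cascaded from the top down.
cascade = [0.0]
for t_hi, t_lo in zip(bounds, bounds[1:]):
    net_cp = sum(sign * cp for ts, tt, cp, sign in shifted
                 if min(ts, tt) <= t_lo and max(ts, tt) >= t_hi)
    cascade.append(cascade[-1] + net_cp * (t_hi - t_lo))

q_hot = max(0.0, -min(cascade))          # QH,min restores feasibility
feasible = [q + q_hot for q in cascade]
q_cold = feasible[-1]                    # QC,min
pinch = bounds[min(range(len(feasible)), key=feasible.__getitem__)]

print(f"QH,min = {q_hot:.1f} kW, QC,min = {q_cold:.1f} kW, "
      f"Pinch at {pinch:.1f} degC (shifted)")
```

For these streams the sketch reports QH,min = 20 kW, QC,min = 60 kW, and a Pinch at a shifted temperature of 85 °C, i.e. hot streams at 90 °C and cold streams at 80 °C.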

Partitioning the original problem into subsystems may introduce redundancy in the number of heat exchangers. When capital cost is high, it may be necessary to relax the Pinch constraint in order to reduce the number of units. This relaxation is paid for by additional energy consumption, which has to be optimized against the reduction in capital cost.

The result is that the heat recovery problem becomes an optimization of both energy and capital costs, constrained by a minimum temperature approach in designing the heat exchangers. Stream selection and data extraction are essential in Pinch Analysis for effective heat integration.

The key computational assumption in Pinch Point Analysis is a constant heat capacity flow rate (CP) over the interval where the streams are matched. Where this does not hold, stream segmentation is necessary.

The counter-current heat flow of the streams selected for integration may be represented by means of Composite Curves (CC). Another diagram, the Grand Composite Curve (GCC), allows visualization of the excess heat between hot and cold streams across temperature intervals. This helps with the selection and placement of utilities, as well as the identification of potential process/process matches.
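
A composite curve itself is straightforward to assemble: within each temperature interval, the CPs of all streams present are summed, so the curve's slope changes at every supply or target temperature. The sketch below, reusing the hypothetical streams from the Problem Table example, builds the break points of such a curve.

```python
# Composite curve sketch: cumulative enthalpy versus temperature, with the
# slope in each interval set by the sum of the CPs of the streams present.
def composite_curve(streams):
    """streams: list of (T_supply, T_target, CP); returns (T, H) break points."""
    temps = sorted({t for ts, tt, _ in streams for t in (ts, tt)})
    h, points = 0.0, [(min(temps), 0.0)]
    for t_lo, t_hi in zip(temps, temps[1:]):
        cp_sum = sum(cp for ts, tt, cp in streams
                     if min(ts, tt) <= t_lo and max(ts, tt) >= t_hi)
        h += cp_sum * (t_hi - t_lo)
        points.append((t_hi, h))
    return points

hot_cc = composite_curve([(170.0, 60.0, 3.0), (150.0, 30.0, 1.5)])
cold_cc = composite_curve([(20.0, 135.0, 2.0), (80.0, 140.0, 4.0)])
print(hot_cc)  # [(30.0, 0.0), (60.0, 45.0), (150.0, 450.0), (170.0, 510.0)]
```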

The synthesis of a Heat Exchanger Network consists of three main activities:

  • Set a reference basis for energy integration, namely:
    - Minimum Energy Requirements (MER)
    - Utility selection and placement
    - Number of units and heat exchange area
    - Cost of energy and hardware at MER

  • Synthesis of heat exchanger network (HEN) for minimum energy requirements and maximum heat recovery. Determine matches in subsystems and generate alternatives.
  • Network optimization. Reduce redundant elements, such as small heat exchangers or small split streams. Find the trade-off between utility consumption, heat exchange area and number of units. Consider design constraints.

Design improvements can be realized by Appropriate Placement and the Plus/Minus principle. Appropriate Placement defines the optimal location of individual units with respect to the Pinch. It applies to heat engines, heat pumps, distillation columns, evaporators, furnaces, and to any other unit operation that can be represented in terms of heat sources and sinks.

The Plus/Minus principle helps to detect major flowsheet modifications that can significantly improve energy recovery. Navigating between Appropriate Placement, the Plus/Minus principle and targeting allows the designer to formulate near-optimum targets for the heat exchanger network without ever sizing heat exchangers.

The Pinch Point principle has been extended to operations involving mass exchange. Water saving can be treated systematically by the Water Pinch methodology. Similarly, Hydrogen Pinch can efficiently handle the hydrogen inventory in refineries. Other applications of industrial interest have been developed in the field of waste and emissions minimization. Systematic methods for handling the integration of mass-exchange operations are still in development; in this area, methods based on optimization techniques are very promising.

RO/DI Water Systems

RO/DI stands for Reverse Osmosis and Deionization. An RO/DI unit is a multi-stage water filter that takes in ordinary tap water and produces highly purified water.

Tap water often contains impurities that can cause problems. These may include phosphates, nitrates, chlorine, and various heavy metals. Excessive phosphate and nitrate levels can cause an algae bloom. Copper is often present in tap water due to leaching from pipes and is highly toxic to invertebrates. An RO/DI filter removes practically all of these impurities.

There are typically four stages in an RO/DI filter:

  • Sediment filter
  • Carbon block
  • Reverse osmosis membrane
  • Deionization resin

If there are fewer than four stages, something was left out. If there are more, something was duplicated.

The sediment filter, typically a foam block, removes particles from the water. Its purpose is to prevent clogging of the carbon block and RO membrane. Good sediment filters will remove particles down to one micron or smaller.

The carbon stage, typically a block of powdered activated carbon, filters out smaller particles, adsorbs some dissolved compounds, and deactivates chlorine. The last of these is the most important: free chlorine in the water will destroy the RO membrane.

The RO membrane is a semi-permeable thin film. Water under pressure is forced through it. Molecules larger/heavier than water (which is very small/light) penetrate the membrane less easily and tend to be left behind.

The DI resin exchanges the remaining ions, removing them from the solution.

There are three types of RO membrane on the market:

  • Cellulose Triacetate (CTA)
  • Thin Film Composite (TFC)
  • Poly-Vinyl Chloride (PVC)

The difference between the three concerns how they are affected by chlorine: CTA membranes require chlorine in the water to prevent them from rotting. TFC membranes are damaged by chlorine and must be protected from it. PVC membranes are impervious to both chlorine and bacteria.

Reverse osmosis typically removes 90-98% of all the impurities of significance to the aquarist. If that is good enough for your needs, then you don’t need the DI stage. The use of RO by itself is certainly better than plain tap water and, in many cases, is perfectly adequate.

RO by itself might not be adequate if your tap water contains something that you want to reduce by more than 90-98%.
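
The arithmetic behind that judgment is simple: the product concentration is roughly the feed concentration times (1 − rejection). The numbers below are illustrative, not measurements.

```python
# Sketch: what an RO membrane alone leaves in the product water.
tap_tds = 300.0    # tap water total dissolved solids, ppm (illustrative)
rejection = 0.95   # within the 90-98% range quoted above

product_tds = tap_tds * (1.0 - rejection)
print(f"RO product: {product_tds:.0f} ppm")  # 15 ppm at 95% rejection

# If a contaminant must drop further than that, a DI stage is needed:
copper_in, copper_limit = 0.5, 0.01          # ppm, illustrative values
print("DI stage needed:", copper_in * (1.0 - rejection) > copper_limit)
```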

A DI stage by itself, without the other filter stages, will produce water that is pretty much free of dissolved solids. However, DI resin is fairly expensive and will last only about 1/20th as long when used without additional filtration. If you’re only going to buy either an RO or a DI unit, it would be best to choose the RO, unless you only need small amounts of purified water.

Duplicating stages can extend their life and improve their efficiency. For example, if you have two DI stages in series, one can be replaced when it’s exhausted without producing any impure water. If you have both a 5-micron sediment filter and a 1-micron filter, they will take longer to clog up. If there are two carbon stages, there will be less chlorine attacking the TFC membrane. Whether the extra stages are worth the extra money is largely a matter of circumstance and opinion.

RO/DI capacities are measured in gallons per day (GPD) and typically fall within the 25-100 GPD range. The main difference between these units is the size of the RO membrane. Other differences are (a) the flow restrictor, which determines how much wastewater is produced; (b) the contact time in the carbon and DI stages, which is shorter in high-GPD units than in low-GPD units; and (c) the membrane construction, since units larger than 35 GPD typically have welded-together membranes.

As a result of the membrane welding and the reduced carbon contact time, RO membranes larger than 35 GPD produce water that is slightly less pure. This primarily affects the life of the DI resin.

Most aquarists won’t use more than 25 GPD averaged over time. If you have a decent size storage container, that size should be adequate. A higher GPD rating comes in handy, however, when filling a large tank for the first time or in emergencies when you need a lot of water in a hurry.

The advertised GPD values assume ideal conditions, notably optimum water pressure and temperature. The purity of your tap water also affects the output. In other words, your mileage will vary.

An RO filter has two outputs: purified water and wastewater. A well-designed unit will have about 4X as much wastewater as purified water. The idea is that the impurities that don’t go through the membrane get flushed out with the wastewater.
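
The resulting water balance is easy to estimate; the sketch below assumes the 4:1 ratio just mentioned and a hypothetical 25 GPD unit.

```python
# Sketch: daily balance for an RO unit with a 4:1 waste-to-product ratio.
product_gpd = 25.0
waste_gpd = 4.0 * product_gpd
feed_gpd = product_gpd + waste_gpd
recovery = product_gpd / feed_gpd
print(f"Feed {feed_gpd:.0f} GPD, waste {waste_gpd:.0f} GPD, "
      f"recovery {recovery:.0%}")  # Feed 125 GPD, waste 100 GPD, recovery 20%
```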

There is nothing particularly wrong with the wastewater except for a slightly elevated dissolved solid content. It may actually be cleaner than your tap water because of the sediment and carbon filters. Feel free to water your plants with it.

What is BIM?

The BIM Handbook (Eastman, Teicholz, Sacks & Liston 2011) offers this definition: “With BIM (Building Information Modeling) technology, one or more accurate virtual models of a building are constructed digitally. They support design through its phases, allowing better analysis and control than manual processes. When completed, these computer-generated models contain precise geometry and data needed to support the construction, fabrication, and procurement activities through which the building is realized.”

BIM or Building Information Modeling is a process for creating and managing information on a construction project across the project lifecycle. One of the key outputs of this process is the Building Information Model, the digital description of every aspect of the built asset. This model draws on information assembled collaboratively and updated at key stages of a project. Creating a digital Building Information Model enables those who interact with the building to optimize their actions, resulting in a greater whole life value for the asset.

B is for Building.

The key point to remember here is that “building” doesn’t mean “a building.” BIM can be used for so much more than designing a structure with four walls and a roof. This preconceived notion of “building” comes from its roots—in an etymological sense, it quite literally means “house.”

In order to get the true gist of BIM, however, it helps to think of the word “building” in terms of the verb “to build.”

BIM is a process that involves the act of building something together, whether it relates to architecture, infrastructure, civil engineering, landscaping or other large-scale projects.

I is for Information.

And that information is embedded into every aspect of your project. This is what makes BIM “smart.”

Every project comes with a staggering amount of information, from prices to performance ratings and predicted lifetimes. It tells your project’s life story long before the ground is ever broken and it will help track potential issues throughout your project’s lifetime. BIM is a way to bring all of these details into one place so it’s easy to keep track of everything.

M is for Modeling.

In BIM, every project is built twice—once in a virtual environment to make sure that everything is just right and once in a real environment to bring the project to life.

This step is the overview of every other aspect of the building and its information. It provides the measure or standard for the building project—an analogy or smaller-scale representation of the final appearance and effect. It will continue to model this representation throughout the building’s lifespan.

This model can become a tool for the building owner’s reference long after construction is completed, helping to inform maintenance and other decisions. It’s also the step that will help to sell a concept while condensing all of those other layers of information that show the building’s every detail.

How can BIM help you?

BIM brings together all of the information about every component of a building in one place, and makes it possible for anyone to access that information for any purpose, e.g. to integrate different aspects of the design more effectively. In this way, the risk of mistakes or discrepancies is reduced, and abortive costs are minimized.

BIM data can be used to illustrate the entire building life-cycle, from cradle to cradle: from inception and design to demolition and materials reuse. Spaces, systems, products and sequences can be shown in relative scale to each other and, in turn, relative to the entire project. And through conflict detection, BIM prevents errors from creeping in at the various stages of development and construction.

What is a BIM object?

A BIM object is a combination of many things, as the sketch after this list illustrates:

  • Information content that defines a product
  • Product properties, such as thermal performance
  • Geometry representing the product’s physical characteristics
  • Visualisation data giving the object a recognisable appearance
  • Functional data enabling the object to be positioned and to behave in the same manner as the product itself
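
A minimal sketch of how these facets might be grouped in code follows; the field names and values are illustrative and do not follow any particular BIM schema.

```python
# Hypothetical grouping of a BIM object's facets; not a real BIM schema.
from dataclasses import dataclass

@dataclass
class BimObject:
    product_info: dict     # information content that defines the product
    properties: dict       # product properties, e.g. thermal performance
    geometry: str          # physical form, e.g. a reference to a solid model
    visualisation: str     # appearance data for a recognisable rendering
    functional_data: dict  # placement and behaviour rules

window = BimObject(
    product_info={"manufacturer": "ExampleCo", "model": "W-100"},
    properties={"u_value_W_per_m2K": 1.4},
    geometry="window_w100.step",
    visualisation="window_w100_texture.png",
    functional_data={"host": "wall", "opens": "inward"},
)
print(window.properties["u_value_W_per_m2K"])
```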

What is the future of BIM?

The future of the construction industry is digital, and BIM is the future of design and long-term facility management. It is government-led and driven by technology and clear processes, and it is implementing change across all industries. As hardware, software and cloud applications herald greater capability to handle increasing amounts of raw data and information, the use of BIM will become even more pronounced than it is in current projects.

BIM is both a best-practice process and 3D modeling software. By using it, designers can create a shared building project with integrated information in a format that models both the structure and the entire timeline of the project from inception to eventual demolition.

It enables architects and engineers alike to work on a single project from anywhere in the world. It condenses a plethora of information about every detail into a workable format. It facilitates testing and analysis during the design phase to find the best answer to a problem.

It makes for easier design, simpler coordination between team members and easier structure maintenance across the entire built environment—and this is just the beginning.

Cleanroom

Typically used in manufacturing or scientific research, a cleanroom is a controlled environment with a low level of pollutants such as dust, airborne microbes, aerosol particles, and chemical vapors. More precisely, a cleanroom has a controlled level of contamination specified by the number of particles per cubic meter at a given particle size. The ambient air in a typical city environment contains 35,000,000 particles per cubic meter of 0.5 µm diameter and larger, corresponding to an ISO 9 cleanroom, the lowest level of the cleanroom standards.
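
Those class limits follow a published formula in ISO 14644-1: the maximum permitted concentration of particles of size D µm and larger in an ISO Class N room is Cn = 10^N × (0.1/D)^2.08 particles per cubic meter. A short sketch reproduces the ambient-air figure quoted above:

```python
# ISO 14644-1 class limit: Cn = 10**N * (0.1 / D)**2.08 particles/m3,
# for particles of size D (in micrometres) and larger.
def iso_limit(iso_class: int, particle_um: float) -> float:
    return 10.0 ** iso_class * (0.1 / particle_um) ** 2.08

print(f"ISO 9 at 0.5 um: {iso_limit(9, 0.5):,.0f} /m3")  # ~35,200,000
print(f"ISO 5 at 0.5 um: {iso_limit(5, 0.5):,.0f} /m3")  # ~3,520
```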

Cleanroom Overview

Cleanrooms are used in practically every industry where small particles can adversely affect the manufacturing process. They vary in size and complexity, and are used extensively in industries such as semiconductor manufacturing, pharmaceuticals, biotech, medical devices and life sciences, as well as in critical process manufacturing common in aerospace, optics, military and Department of Energy applications.

A cleanroom is any contained space where provisions are made to reduce particulate contamination and control other environmental parameters such as temperature, humidity and pressure. The key component is the High Efficiency Particulate Air (HEPA) filter, which traps particles 0.3 micron and larger in size. All of the air delivered to a cleanroom passes through HEPA filters, and where stringent cleanliness performance is necessary, Ultra Low Particulate Air (ULPA) filters are used.

Personnel selected to work in cleanrooms undergo extensive training in contamination control theory. They enter and exit the cleanroom through airlocks, air showers and/or gowning rooms, and they must wear special clothing designed to trap contaminants that are naturally generated by skin and the body.

Depending on the room classification or function, personnel gowning may be as limited as lab coats and hairnets, or as extensive as fully enveloped in multiple layered bunny suits with self-contained breathing apparatus.

Cleanroom clothing is used to prevent substances from being released off the wearer’s body and contaminating the environment; the clothing itself must likewise not release particles or fibers. This type of personnel contamination can degrade product performance in the semiconductor and pharmaceutical industries, and it can cause cross-infection between medical staff and patients in the healthcare industry, for example.

Cleanroom garments include boots, shoes, aprons, beard covers, bouffant caps, coveralls, face masks, frocks/lab coats, gowns, gloves and finger cots, hairnets, hoods, sleeves and shoe covers. The type of cleanroom garments used should reflect the cleanroom and product specifications. Low-level cleanrooms may only require special shoes with completely smooth soles that do not track in dust or dirt. However, shoe bottoms must not create slipping hazards, since safety always takes precedence. A cleanroom suit is usually required for entering a cleanroom. Class 10,000 cleanrooms may use simple smocks, head covers, and booties. For Class 10 cleanrooms, careful gowning procedures with a zipped coverall, boots, gloves and complete respirator enclosure are required.

Cleanroom Air Flow Principles

Cleanrooms maintain particulate-free air through the use of either HEPA or ULPA filters employing laminar or turbulent air flow principles. Laminar, or unidirectional, air flow systems direct filtered air downward in a constant stream, and are typically employed across 100% of the ceiling to maintain constant, unidirectional flow. Laminar flow criteria are generally specified for portable work stations (LF hoods) and are mandated in ISO 1 through ISO 4 classified cleanrooms.

Proper cleanroom design encompasses the entire air distribution system, including provisions for adequate, downstream air returns. In vertical flow rooms, this means the use of low wall air returns around the perimeter of the zone. In horizontal flow applications, it requires the use of air returns at the downstream boundary of the process. Ceiling-mounted air returns are contrary to proper cleanroom system design.

RESOURCE OPTIMIZATION

In today’s industrial age, where manufacturing is crucial and synonymous with development and growth, the need to use resources effectively and efficiently has become pressing. The continuous growth of industries has led to the development of highly efficient, leaner processes that focus on minimum wastage and maximum utilization of available resources through technologies developed over time. The use of robots and process automation to eliminate human error and increase efficiency has been adopted by almost every industry, further facilitated by the Internet of Things (IoT) in developing smarter processes.

Utility optimization consists not only of handling resources in a smart manner, but also of optimizing the path or manner in which they are handled. Adjusting the placement of machines and defining the flow of resources throughout the shop floor is an integral part of the utility optimization process. An efficient flow ensures efficient execution of the process and minimum wastage of time and resources. This is usually done through process flow charts to determine process steps, as well as Pareto charts to determine the importance of every resource in terms of its usage and need in each process.
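
The Pareto analysis itself amounts to sorting resources by usage and tracking the cumulative share; the sketch below uses invented shop-floor figures.

```python
# Pareto ranking sketch: which resources dominate usage? (invented data)
usage = {"compressed air": 420, "steam": 310, "cooling water": 150,
         "electricity": 90, "nitrogen": 30}

total = sum(usage.values())
cumulative = 0
for resource, amount in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += amount
    print(f"{resource:15s} {amount:4d}  cumulative {cumulative / total:6.1%}")
```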

In order to execute resource optimization and ensure that it is carried out continuously, energy audits and water audits can be performed to track the organization’s energy needs and water consumption respectively. The audits not only provide feedback about the status of optimization within the organization, but also help in tracking progress in this area and setting targets accordingly. Although these audits are somewhat time-consuming, they are highly necessary, as they help the organization stay aligned with its targets.

Optimization of resource usage not only decreases the amount of waste generated, but also leads to greater profits and creates opportunities for recycling and reusing wasted resources. In many cases, resource optimization leads to a reduction in carbon footprint, which is vital given currently degrading environmental conditions. Since India agreed to ratify the second commitment period (2013-2020) of the 1997 Kyoto Protocol for the reduction of greenhouse gases, the need for cutting emissions and correspondingly minimizing waste through resource optimization has gained more importance. The rising trend of green technologies has facilitated optimization as well as reductions in energy usage and emissions.

The whole world is currently progressing at an unbelievable rate, and the environment is being affected by that very progress. Resource optimization has therefore become necessary not only for generating greater profits and minimizing wastage of resources, but also for sustainability. “Recycle and Reuse” has become the motto of every major organization, and new ways to optimize resource usage are constantly being researched and put into use. Since the progression of technology is inevitable, there will always be a great need for effective resource optimization processes that contribute both to an organization’s profits and to sustainability.

Project Management

Project Management Institute, Inc. (PMI) defines project management as “the application of knowledge, skills, tools and techniques to a broad range of activities in order to meet the requirements of a particular project.” Project management is the discipline of using established principles, procedures and policies to manage a project from conception through completion. It is often abbreviated as PM.

Project management oversees the planning, organizing and implementing of a project. A project is an undertaking with specific start and end parameters designed to produce a defined outcome, such as a new computer system. A project is different from ongoing processes, such as a governance program or an asset management program.

The project management plan is expected to effectively and efficiently guide all aspects of a project from start to finish, with the ideal goal of delivering the outcome on time and on budget. A project plan often begins with a project charter, and it is expected to identify potential challenges in advance and handle any roadblocks as they arise in order to keep the project on schedule.

The process of directing and controlling a project from start to finish may be further divided into five basic phases:

Project conception and initiation: An idea for a project is carefully examined to determine whether or not it benefits the organization. During this phase, a decision-making team determines whether the project is feasible and whether the organization has the resources to take it on.

Project definition and planning: A project plan, project charter and/or project scope may be put in writing, outlining the work to be performed. During this phase, a team should prioritize the project, calculate a budget and schedule, and determine what resources are needed.

Project launch or execution: Tasks are distributed to resources and teams are informed of their responsibilities. This is a good time to bring up important project-related information.

Project performance and control: Project managers compare project status and progress to the actual plan as resources perform the scheduled work. During this phase, project managers may need to adjust schedules or do whatever is necessary to keep the project on track.

Project close: After project tasks are completed and the client has approved the outcome, an evaluation is necessary to highlight project success and/or learn from project history.

Projects and project management processes vary from industry to industry; however, these are the more traditional elements of a project. The overarching goal is typically to offer a product, change a process or solve a problem in order to benefit the organization.

Responsibilities of a project manager

Business leaders recognize project management as a specific function within the organization and hire individuals specifically trained in this discipline — i.e., project managers — to handle their organization’s project management needs.

Project managers can employ various methods and approaches to run projects, generally selecting the best approach based on the nature of the project, organizational needs and culture, the skills of those working on the projects, and other factors.

Managing a project involves multiple steps. Although the terminology for these steps varies, they often include:

  • Defining project goals;
  • Outlining the steps needed to achieve those goals;
  • Identifying the resources required to accomplish those steps;
  • Determining the budget and time required for each of the steps, as well as the project as a whole;
  • Overseeing the actual implementation and execution of the work; and
  • Delivering the finished outcome.

As part of a strong project management plan, project managers implement controls to assess performance and progress against the established schedule, budget and objectives laid out in the project management plan, often referred to collectively as the project scope.
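
One common way to quantify such controls is earned value analysis, which compares the budgeted cost of work scheduled, the budgeted cost of work performed, and the actual cost; the figures below are hypothetical.

```python
# Earned-value control sketch (all figures hypothetical).
planned_value = 50_000.0  # budgeted cost of work scheduled to date
earned_value = 42_000.0   # budgeted cost of work actually performed
actual_cost = 48_000.0    # actual cost of the work performed

spi = earned_value / planned_value  # < 1.0: behind schedule
cpi = earned_value / actual_cost    # < 1.0: over budget
print(f"SPI = {spi:.2f}, CPI = {cpi:.2f}")  # SPI = 0.84, CPI = 0.88
```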

Because projects often require teams of workers who do not typically work together, effective project management requires strong communication and negotiation skills. Project managers also need to work closely with the multiple stakeholders who have interests in any given project, another area where strong communication and negotiation skills are essential.

Software Validation

Validation is a critical tool to assure the quality of computer system performance. Computer system software validation increases the reliability of systems, resulting in fewer errors and less risk to process and data integrity. It also reduces long-term system and project costs by minimizing the cost of maintenance and rework.

Software validation commences with a user requirements specification (URS). The URS is prepared to describe the critical functionalities required for the analysis. It is essential that the document is properly scoped so that the procurement, installation, commissioning, validation, user training, maintenance, calibration and cleaning tasks are all investigated and defined adequately.

To scope and define an adequate validation procedure, the URS has to be sufficiently detailed for various assessments to be made. The main assessment concerning qualification documentation is the risk assessment, which is concerned with ensuring that the proposed degree of validation complies with the regulatory requirements.

At this early stage it is therefore necessary to execute a Validation Risk Assessment (VRA) protocol against the end user’s requirements. This step ensures that the more obscure pieces of ancillary equipment and support services are fully understood and their requirements investigated, priced and included in the final issue of the URS, which is sent out with the Request to Tender. This stage is essential if the URS is to accurately define the depth and scope of validation appropriate for verifying that the software delivers all the requirements it details.

The outcome of the VRA drives a split in the scope of the software validation documentation. If the VRA categorizes the software as requiring Full Life Cycle Validation (FLCV), then a considerable amount of the validation effort goes into establishing how the software originated and how it was designed and developed, in order to establish that its basic concept and development are robust, sound and in accordance with best practices.

The original development plans, code reviews, method reviews and test plans must be available for this software validation documentation to be executed successfully. Once this proof of quality build is established, validation follows a more conventional path of inspections and verifications.

Software that is not classified as requiring FLCV treatment does not require this depth of verification into its quality build history, and is validated mainly by the more conventional path of inspections and verifications.

Dynamic Testing

Dynamic testing verifies the execution flow of software, including decision paths, inputs, and outputs. Dynamic testing involves creating test cases, test vectors and oracles, and executing the software against these tests. The results are then compared with expected or known correct behavior of the software. Because the number of execution paths and conditions increases exponentially with the number of lines of code, testing for all possible execution traces and conditions for the software is impossible.
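
A minimal sketch of the idea, with a hypothetical function under test: each test vector pairs inputs with the oracle's expected output, and the error path is exercised as well, since decision paths count as behavior.

```python
# Dynamic testing sketch: test vectors, an oracle, and execution.
def dose_rate(total_dose: float, hours: float) -> float:
    """Hypothetical function under test."""
    if hours <= 0:
        raise ValueError("hours must be positive")
    return total_dose / hours

# Each vector pairs inputs with the expected (oracle) output.
test_vectors = [((100.0, 4.0), 25.0), ((0.0, 8.0), 0.0)]
for (dose, hours), expected in test_vectors:
    result = dose_rate(dose, hours)
    assert result == expected, f"dose_rate({dose}, {hours}) -> {result}"

# The error path is a decision path too, so it gets its own test.
try:
    dose_rate(100.0, 0.0)
except ValueError:
    print("all tests passed")
else:
    raise AssertionError("expected ValueError for non-positive hours")
```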

Static Analysis

Code inspections and testing can reduce coding errors; however, experience has shown that the process needs to be complemented with other methods. One such method is static analysis, a relatively new method that largely automates the software qualification process. The technique attempts to identify errors in the code but does not necessarily prove their absence; it is used to identify potential and actual defects in source code.

Abstract Interpretation Verification

A code verification solution that includes abstract interpretation can be instrumental in assuring software safety and a good quality process. It is a sound verification process that enables the achievement of high integrity in embedded devices. Regulatory bodies such as the FDA and some segments of industry recognize the value of sound verification principles and are using tools based on these principles.
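
The flavor of these techniques can be sketched with the classic interval domain: every variable is tracked as a (lo, hi) range, so certain defects can be proven absent without enumerating execution paths. This is purely illustrative; production abstract-interpretation tools are far more sophisticated.

```python
# Interval-domain sketch: prove the absence of a division by zero
# without running the program on concrete inputs.
Interval = tuple[float, float]

def add(a: Interval, b: Interval) -> Interval:
    return (a[0] + b[0], a[1] + b[1])

def may_divide_by_zero(divisor: Interval) -> bool:
    lo, hi = divisor
    return lo <= 0.0 <= hi

x: Interval = (1.0, 10.0)       # an input known only by its range
y = add(x, (5.0, 5.0))          # y is provably in [6, 15]
print(may_divide_by_zero(y))    # False: dividing by y is proven safe
z = add(x, (-3.0, -3.0))        # z is in [-2, 7], which contains zero
print(may_divide_by_zero(z))    # True: a defect cannot be ruled out
```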

Risk Based Inspection

A Risk Based Inspection (RBI) is essentially a risk analysis of operational procedures. It assesses the existing risks to safety and plant integrity and prepares the plant for possible inspections. The end result is a document that outlines, measures and defines organizational procedures based on standards, codes and best practices.

Generally, RBIs are used when a company wants to change the required frequency of inspection for pressure-rated vessels. This is applicable to the mechanical integrity element of a Process Safety Management (PSM) plan.

Equipment used to process, store, or handle highly hazardous chemicals has to be designed, constructed, installed, and maintained to minimize the risk of releases of such chemicals. This requires that a mechanical integrity program be in place to ensure the continued integrity of process equipment.

Elements of a mechanical integrity program include identifying and categorizing equipment and instrumentation; inspections and tests and their frequency; maintenance procedures; training of maintenance personnel; criteria for acceptable test results; documentation of test and inspection results; and documentation of manufacturer recommendations for equipment and instrumentation.

RBI and PSM overlap somewhat in the area of mechanical integrity with regard to structural engineering. Structural engineering is an important field of engineering that deals with the integrity of objects such as plant components or structures, and serves the industry by performing analytical assessments, experiments, walkdowns and numerical modeling. Some companies specialize in supporting industrial process facilities and power plants.

In plants, the structural challenges are often related to pressure, temperature and dynamic forces. An example is the seismic adequacy of piping or components under power operation. Engineers perform seismic walkdowns on a regular basis to screen for the seismic adequacy of systems. Several specialty engineers and contractors have undergone professional seismic training, which also allows them to assess safety-related electrical components such as instrumentation and control components.

Proper application of structural engineering expertise can help mitigate issues by ensuring that the plant and components are properly engineered. This will avoid machinery breakdown and costly plant outages. The goal is to support customers to achieve a safer and more efficient work environment along with enhanced plant durability.

Thus, for several aspects of RBI and PSM, an engineering firm with testing labs is ideal in providing a one-stop resource for structural engineering issues, including analyzing a problem, engineering a solution, verification, and oversight of fabrication and installation, as required.

Benefits to having Risk Management Services are:

  • Understand and address hazards that pose the highest level of risk to your process facility
  • Ensure compliance with relevant national, local and industry standards
  • Implement best engineering practices
  • Reduce overall level of risk
  • Increase productivity and employee morale
  • Make organization more competitive
  • Decrease insurance premiums

Combustible Dust Hazard Analysis (DHA): Explosion and Fire Hazard Evaluation

An onsite assessment provides an experienced engineer to visit a facility, evaluate its compliance with relevant national, local and industry standards and provide recommendations for risk reduction. Additional services can include deflagration vent sizing calculations, desktop reviews, equipment selection guidance, and training of personnel on combustible dust hazards as well as development of process safety programs to address these issues.

Risk-based inspection is a means of using inspection resources more cost-effectively and with confidence. The method ensures that you are complying with current safety regulations and also enables you to make inspection decisions informed by greater information and expertise, thereby saving time and money.

Plant operators face an increasingly complex challenge when managing the integrity of assets: to achieve operational excellence and maximum asset performance while minimizing costs and maintaining the highest safety and environmental standards.

Risk-based inspection principles offer an established methodology for efficient plant maintenance and, with Panorama’s expertise, we can work with you to develop cost-effective management solutions.

Risk Assessment

Many people use hazard and risk interchangeably on a daily basis. They are, however, two different concepts. The difference may not be much of an issue in everyday conversation, but when it comes to risk assessment and control, it is extremely important. Below you will gain a better understanding of the difference between the two and why it matters.

The basic difference is that a hazard is something that can cause harm, while a risk is the probability that a hazard will actually cause harm. Although they are used synonymously, knowing the difference could save your life or allow you to enjoy it more thoroughly.

In essence, a hazard will not be risky unless you are exposed to enough of it that it actually causes harm; the risk itself may actually be zero or it may be greatly reduced when precautions are taken around that hazard.

The simple relationship between the two is that you have to have exposure to a hazard to experience a risk. Thus, it is vital that you know the level of exposure you are going to have to the hazard to better understand how much risk is actually involved.

Risk Assessment Methods

There are a variety of risk assessment methods for the various categories. When it comes to the difference between hazard and risk, several categories may use different measurements and methods. As an example, the way risk is assessed in human health may be different from the risk assessment for project management.

Why Use a Risk Assessment Method?

A risk assessment is a tool used to determine the potential results from any given hazard. The assessment uses a combination of situational information, previous knowledge about the process, and judgments made from that knowledge and information. Since risk is the potential damage done by a hazard, there are certain outcomes that any good risk assessment needs to produce.

There are five main outcomes that are needed for an effective risk assessment. By the end of the assessment you should know:

  • Any situations that may be hazardous
  • Which method is appropriate to use when determining the likelihood that the hazard will occur
  • Alternative solutions for reducing or eliminating the risk or any negative consequences that may occur
  • More information for making decisions about risk management
  • An estimate of the uncertainty of the analysis

Steps of a Risk Assessment

Step 1: Discover the hazards. You can do this by using several different strategies such as walking around the area, navigating through portfolios and databases, or asking people who are around.

Step 2: Determine who may be harmed and how they may be harmed. After discovering the hazards you will need to determine who may be harmed by them, as well as how they may be harmed.

Step 3: Analyze the amount of risk and decide how to control it. You may find that you can simply remove the hazard. If not, decide which control method will work best to reduce the risk.
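
Step 3 is often supported by a simple likelihood-times-severity scoring scheme; the scales and thresholds below are illustrative, since organizations define their own.

```python
# Risk scoring sketch: likelihood x severity with illustrative thresholds.
LIKELIHOOD = {"rare": 1, "possible": 3, "likely": 5}
SEVERITY = {"minor": 1, "serious": 3, "major": 5}

def risk_score(likelihood: str, severity: str) -> int:
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

def control_action(score: int) -> str:
    if score >= 15:
        return "eliminate or substitute the hazard"
    if score >= 5:
        return "apply controls to reduce the risk"
    return "monitor and review"

score = risk_score("possible", "major")
print(score, "->", control_action(score))  # 15 -> eliminate or substitute...
```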

Step 4: Document your assessment and results. It is important that you document what you find. This is done for legal reasons to protect you, the location, and any possible persons that may be involved. You also want to be sure that you write down your next plan of action – what control measures you are going to take.

Step 5: Regularly review and update your assessment. It is tempting to think that once the hazard is gone, all risks of harm are gone. This is not true. In some cases the hazard may return, and in others new hazards may develop. Regular checks will keep you and everyone around you safe.

Risk Control Methods

Knowing the difference between hazard and risk leads to risk control. Risk is controlled when your business takes actions that eliminate safety risks as far as is practical. If it is not possible to completely eliminate a risk, controlling it may mean taking actions to minimize the risks and hazards within the work environment.

There are four main methods that can be used to eliminate or minimize these risks – avoidance, loss prevention & reduction, transfer, and acceptance.

1. Avoidance

This is by far the easiest way to control any risk: find all potentially hazardous activities and stop them. It is important to remember that when choosing this option you may also miss out on other opportunities and gains.

2. Loss Prevention & Reduction

Using this method you will reduce the frequency and severity of a specific loss. You may decide to increase security measures or improve maintenance, or you may create rules that require your employees to wear certain safety gear.

3. Transfer

When you choose this method you will create a contract with a third party to deal with that risk. A couple great examples would be hiring a security company to improve security or hiring a cleaning crew to ensure health hazards are cleaned up.

4. Acceptance

This last method is not to be taken lightly. When you feel that transfer or loss prevention & reduction methods are not necessary or are too excessive, this may be the option for you. However, it is important that you understand this could possibly be dangerous for your company. Undergoing too many losses or enduring too many negative consequences can quickly sink your business.

What is Zero Liquid Discharge?

Zero Liquid Discharge (ZLD) is a wastewater treatment process developed to completely eliminate all liquid discharge from a system. The goal of a zero liquid discharge system is to reduce the volume of wastewater that requires further treatment, economically process wastewater and produce a clean stream suitable for reuse. Companies may begin to explore ZLD because of ever-tightening wastewater disposal regulations, company mandated green initiatives, public perception of industrial impact on the environment, or concern over the quality and quantity of the water supply.

The first step to achieving ZLD is to limit the amount of wastewater that needs to be treated. Once wastewater generation is minimized and the volume of wastewater that needs to be treated is known, you can then explore what equipment is needed, which depends on the characteristics of the wastewater and its volume. A traditional approach to ZLD is to use filtration technology, funnel the reject waters to an evaporator, and send the evaporator concentrate to a crystallizer or spray dryer. However, the equipment to de-water the concentrated slurry tends to be very large and extremely expensive, which limits the cost effectiveness to only those with very large waste streams.

A common ZLD approach is to concentrate the waste water and then dispose of it as a liquid brine, or further crystallize the brine to a solid. A typical evaporator uses tube-style heat exchangers. The evaporated water is recovered and recycled while the brine is continually concentrated to a higher solids concentration. Concentrated brine is disposed of in a variety of ways, such as sending it to a publicly owned treatment works, using evaporation ponds in areas with net positive evaporative climates, or by treatment in a crystallizing system, such as a circulating-magma crystallizer or a spray dryer. Crystallized solids can be landfilled or applied to land, depending upon the crystal characteristics.

For over 30 years, vapor compression evaporation has been the most useful technology for achieving zero liquid discharge. Evaporation recovers about 95% of a wastewater stream as distillate for reuse. The remaining brine can then be reduced to solids in a crystallizer/dewatering device. However, evaporation alone can be an expensive option when flow rates are considerable. One way to solve this problem is to integrate membrane processes with evaporation; these technologies are nowadays often combined to provide complete ZLD systems.
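
The mass balance behind that recovery figure is straightforward: dissolved solids are conserved, so recovering a fraction r of the water multiplies the brine concentration by 1/(1 − r). The flows and concentrations below are hypothetical.

```python
# Evaporator mass balance sketch (hypothetical flows and concentrations).
feed_m3_per_day = 100.0
feed_tds_g_per_l = 15.0  # dissolved solids in the wastewater feed
recovery = 0.95          # fraction recovered as distillate for reuse

distillate = feed_m3_per_day * recovery
brine = feed_m3_per_day - distillate
brine_tds = feed_tds_g_per_l / (1.0 - recovery)  # solids are conserved

print(f"Distillate {distillate:.0f} m3/d for reuse; brine {brine:.0f} m3/d "
      f"at {brine_tds:.0f} g/L to the crystallizer")
```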

The most common membrane processes used so far are reverse osmosis (RO) and electrodialysis reversal (EDR). By combining these technologies with evaporation and crystallization, ZLD systems have become less expensive, though they are combined differently depending on the circumstances. Alongside these components, a variety of other well-known water treatment technologies are used in ZLD systems for pre-treatment and polishing.

These treatments are:

  • pH adjustment
  • Degasification
  • Mixed/separate bed ion exchange
  • Oil/water separation
  • Neutralization
  • Oxidation (UV, ozone, sodium hypochlorite)
  • Dissolved air flotation (DAF)
  • Carbon adsorption
  • Anaerobic or aerobic digestion

As environmental, political and public health entities place more focus on waste water management, ZLD strategies are more often being evaluated for feasibility in industrial facilities. The ZLD approach taken, however, greatly depends on the quality of water available for use.

ZLD benefits:

  • Reduction or elimination of costly regulatory compliance
  • Reliable chemical/physical processes
  • Small footprint
  • Ease of operation
  • Almost 100% water recovery
  • Almost 100% metals and chemical recovery
  • Modular construction
  • Low costs

A well-designed ZLD system will minimize the volume of liquid waste that requires treatment, while also producing a clean stream suitable for use elsewhere in the plant processes.