Pinch Point Analysis

Pinch Point Analysis is a systematic process design methodology consisting of a set of concepts and techniques that ensure optimal use of energy. The Pinch is characterized by a minimum temperature difference between hot and cold streams and designates the location where heat recovery is most constrained.

The fundamental computational tool is the Problem Table algorithm. This tool allows the identification of the Pinch, as well as of the targets for hot and cold utilities.

The net heat flow across the Pinch is zero. Consequently, the system can be split into two stand-alone subsystems, above and below the Pinch. Above the Pinch only hot utility is needed, while below the Pinch only cold utility is necessary. For a given ΔTmin, the hot and cold utility consumption identified in this way defines the Minimum Energy Requirements (MER). No design can achieve MER if there is cross-pinch heat transfer.
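The heat cascade behind these targets can be sketched in a few lines of code. The following Python snippet is a minimal, illustrative implementation of the Problem Table algorithm under the constant-CP assumption noted further below; the stream data and ΔTmin value are hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch of the Problem Table (heat cascade) algorithm.
# Stream data and dt_min are illustrative, not taken from the text.

def problem_table(streams, dt_min):
    """streams: list of dicts with 'Ts' (supply T), 'Tt' (target T), 'CP' (kW/K).
    Hot streams have Ts > Tt; cold streams have Ts < Tt.
    Returns (hot utility target, cold utility target, shifted pinch temperature)."""
    # Shift temperatures: hot streams down, cold streams up, by dt_min/2
    shifted = []
    for s in streams:
        hot = s["Ts"] > s["Tt"]
        shift = -dt_min / 2 if hot else dt_min / 2
        shifted.append({"Ts": s["Ts"] + shift, "Tt": s["Tt"] + shift,
                        "CP": s["CP"], "hot": hot})

    # Temperature interval boundaries (descending)
    bounds = sorted({t for s in shifted for t in (s["Ts"], s["Tt"])}, reverse=True)

    # Net heat surplus/deficit in each interval, cascaded from the top
    cascade = [0.0]
    for T_hi, T_lo in zip(bounds, bounds[1:]):
        net = 0.0
        for s in shifted:
            lo, hi = sorted((s["Ts"], s["Tt"]))
            if lo <= T_lo and hi >= T_hi:          # stream spans this interval
                net += s["CP"] * (T_hi - T_lo) * (1 if s["hot"] else -1)
        cascade.append(cascade[-1] + net)

    q_hot = max(0.0, -min(cascade))                # minimum hot utility (MER)
    feasible = [q_hot + q for q in cascade]        # feasible heat cascade
    q_cold = feasible[-1]                          # minimum cold utility (MER)
    pinch_T = bounds[feasible.index(min(feasible))]  # zero net heat flow
    return q_hot, q_cold, pinch_T

# Illustrative four-stream example (temperatures in °C, CP in kW/K)
streams = [
    {"Ts": 170, "Tt": 60,  "CP": 3.0},   # hot stream
    {"Ts": 150, "Tt": 30,  "CP": 1.5},   # hot stream
    {"Ts": 20,  "Tt": 135, "CP": 2.0},   # cold stream
    {"Ts": 80,  "Tt": 140, "CP": 4.0},   # cold stream
]
print(problem_table(streams, dt_min=10))   # -> (20.0, 60.0, 85.0)
```

For these hypothetical streams the targets are 20 kW of hot utility, 60 kW of cold utility, and a Pinch at a shifted temperature of 85 °C (hot-side 90 °C, cold-side 80 °C).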

The partition of the original problem into subsystems may introduce redundancy in the number of heat exchangers. When the capital cost is high, it might be necessary to relax the Pinch constraint in order to reduce the number of units. The penalty is a supplementary energy consumption, which has to be optimized against the reduction in capital costs.

The result is that the heat recovery problem becomes an optimization of both energy and capital costs, constrained by a minimum temperature approach in the design of the heat exchangers. Stream selection and data extraction are essential in Pinch Analysis for effective heat integration.

The key computational assumption in Pinch Point Analysis is a constant CP (heat capacity flow rate) over the interval where the streams are matched. If this does not hold, stream segmentation is necessary.

The counter-current heat flow of the streams selected for integration may be represented by means of Composite Curves (CC). Another diagram, the Grand Composite Curve (GCC), allows visualization of the excess heat between hot and cold streams against temperature intervals. This feature helps the selection and placement of utilities, as well as the identification of potential process/process matches.
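The GCC is essentially the feasible heat cascade plotted against shifted temperature. A short sketch, reusing the hypothetical cascade produced by the Problem Table snippet above:

```python
# Sketch: the Grand Composite Curve is the feasible heat cascade plotted
# against shifted temperature. The values below are the hypothetical results
# of the Problem Table sketch above (shifted T in °C, heat flow in kW).
bounds   = [165, 145, 140, 85, 55, 25]
feasible = [20.0, 80.0, 82.5, 0.0, 75.0, 60.0]

for T, q in zip(bounds, feasible):
    marker = "  <-- Pinch (zero net heat flow)" if q == 0.0 else ""
    print(f"T* = {T:3d} °C   heat flow = {q:5.1f} kW{marker}")
# The top value (20 kW) is the hot utility target; the bottom value (60 kW)
# is the cold utility target.
```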

The synthesis of a Heat Exchanger Network consists of three main activities:

  • Set a reference basis for energy integration, namely:
    - Minimum Energy Requirements (MER)
    - Utility selection and placement
    - Number of units and heat exchange area
    - Cost of energy and hardware at MER

  • Synthesis of the heat exchanger network (HEN) for minimum energy requirements and maximum heat recovery. Determine matches in subsystems and generate alternatives.
  • Network optimization. Reduce redundant elements, such as small heat exchangers or small split streams. Find the trade-off between utility consumption, heat exchange area and number of units. Consider constraints.

The improvement of a design can be realized by Appropriate Placement and the Plus/Minus principle. Appropriate Placement defines the optimal location of individual units with respect to the Pinch. It applies to heat engines, heat pumps, distillation columns, evaporators, furnaces, and any other unit operation that can be represented in terms of heat sources and sinks.

The Plus/Minus principle helps to detect major flow sheet modifications that can significantly improve energy recovery. Navigating between Appropriate Placement, the Plus/Minus Principle and Targeting allows the designer to formulate near-optimum targets for the heat exchanger network, without ever sizing heat exchangers.

The Pinch Point principle has been extended to operations involving mass exchange. Saving water can be treated systematically by the Water Pinch methodology. Similarly, Hydrogen Pinch can efficiently handle the inventory of hydrogen in refineries. Other applications of industrial interest have been developed in the field of waste and emissions minimization. The systematic methods for handling the integration of mass-exchange operations are still in development; in this area, methods based on optimization techniques are very promising.

RO/DI Water Systems

RO/DI stands for Reverse Osmosis and Deionization. The product is a multi-stage water filter that takes in ordinary tap water and produces highly purified water.

Tap water often contains impurities that can cause problems. These may include phosphates, nitrates, chlorine, and various heavy metals. Excessive phosphate and nitrate levels can cause an algae bloom. Copper is often present in tap water due to leaching from pipes and is highly toxic to invertebrates. An RO/DI filter removes practically all of these impurities.

There are typically four stages in an RO/DI filter:

  • Sediment filter
  • Carbon block
  • Reverse osmosis membrane
  • Deionization resin

If there are fewer than four stages, something was left out. If there are more, something was duplicated.

The sediment filter, typically a foam block, removes particles from the water. Its purpose is to prevent clogging of the carbon block and RO membrane. Good sediment filters will remove particles down to one micron or smaller.

The carbon stage, typically a block of powdered activated carbon, filters out smaller particles, adsorbs some dissolved compounds, and deactivates chlorine. The latter is its most important function: free chlorine in the water will destroy the RO membrane.

The RO membrane is a semi-permeable thin film. Water under pressure is forced through it. Molecules larger/heavier than water (which is very small/light) penetrate the membrane less easily and tend to be left behind.

The DI resin exchanges the remaining ions, removing them from the solution.

There are three types of RO membrane on the market:

  • Cellulose Triacetate (CTA)
  • Thin Film Composite (TFC)
  • Poly-Vinyl Chloride (PVC)

The difference between the three concerns how they are affected by chlorine: CTA membranes require chlorine in the water to prevent them from rotting. TFC membranes are damaged by chlorine and must be protected from it. PVC membranes are impervious to both chlorine and bacteria.

Reverse osmosis typically removes 90-98% of all the impurities of significance to the aquarist. If that is good enough for your needs, then you don’t need the DI stage. The use of RO by itself is certainly better than plain tap water and, in many cases, is perfectly adequate.

RO by itself might not be adequate if your tap water contains something that you want to reduce by more than 90-98%.

A DI stage by itself, without the other filter stages, will produce water that is pretty much free of dissolved solids. However, DI resin is fairly expensive and will last only about 1/20th as long when used without the other filtration stages. If you're only going to buy either an RO or a DI, it would be best to choose the RO, unless you only need small amounts of purified water.
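A back-of-the-envelope sketch of the economics above; the tap-water TDS figure and the rejection rate are assumed, illustrative values rather than numbers from the text.

```python
# Back-of-the-envelope sketch (illustrative numbers, not measured values).
tap_tds = 300.0          # assumed tap water total dissolved solids, ppm
ro_rejection = 0.95      # RO removing ~95% of impurities (within the 90-98% range)

ro_product_tds = tap_tds * (1 - ro_rejection)      # ~15 ppm reaching the DI stage
print(f"TDS after RO: {ro_product_tds:.0f} ppm")

# DI resin capacity is consumed roughly in proportion to the ions it removes,
# so feeding it RO product instead of raw tap water stretches its life by about:
life_factor = tap_tds / ro_product_tds
print(f"Approximate DI life extension behind RO: {life_factor:.0f}x")
```

With these assumed figures the resin lasts roughly 20 times longer behind an RO membrane, which is where the "1/20th as long" rule of thumb comes from.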

Duplicating stages can extend their life and improve their efficiency. For example, if you have two DI stages in series, one can be replaced when it’s exhausted without producing any impure water. If you have both a 5-micron sediment filter and a 1-micron filter, they will take longer to clog up. If there are two carbon stages, there will be less chlorine attacking the TFC membrane. Whether the extra stages are worth the extra money is largely a matter of circumstance and opinion.

RO/DI capacities are measured in gallons per day (GPD) and typically fall within the 25-100 GPD range. The main difference between these units is the size of the RO membrane. Other differences are (a) the flow restrictor, which determines how much waste water is produced, (b) the shorter contact time the water gets in the carbon and DI stages of high-GPD units compared with low-GPD units, and (c) the fact that units larger than 35 GPD typically have welded-together membranes.

As a result of the membrane welding and the reduced carbon contact time, RO membranes larger than 35 GPD produce water that is slightly less pure. This primarily affects the life of the DI resin.

Most aquarists won’t use more than 25 GPD averaged over time. If you have a decent size storage container, that size should be adequate. A higher GPD rating comes in handy, however, when filling a large tank for the first time or in emergencies when you need a lot of water in a hurry.

The advertised GPD values assume ideal conditions, notably optimum water pressure and temperature. The purity of your tap water also affects the output. In other words, your mileage will vary.

An RO filter has two outputs: purified water and wastewater. A well-designed unit will have about 4X as much wastewater as purified water. The idea is that the impurities that don’t go through the membrane get flushed out with the wastewater.

There is nothing particularly wrong with the wastewater except for a slightly elevated dissolved solid content. It may actually be cleaner than your tap water because of the sediment and carbon filters. Feel free to water your plants with it.
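A small, hedged sketch of what the ~4:1 waste ratio and a rated GPD imply in practice; the tank size and the 75 GPD rating are hypothetical, and real output will fall below the rating for the reasons noted above.

```python
# Sketch of RO water budgeting using the ~4:1 waste-to-product ratio above.
# Tank size and unit rating are hypothetical.

def ro_water_budget(product_gallons, waste_ratio=4.0, unit_gpd=75.0):
    """Estimate total tap water drawn and hours of run time (at rated GPD)
    to produce a given volume of purified water."""
    waste = product_gallons * waste_ratio
    total_tap = product_gallons + waste
    hours = product_gallons / unit_gpd * 24.0      # rated GPD assumes ideal conditions
    return waste, total_tap, hours

waste, total, hours = ro_water_budget(50)          # e.g., filling a 50-gallon tank
print(f"~{waste:.0f} gal to drain, ~{total:.0f} gal of tap water, ~{hours:.0f} h at rated output")
```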

What is BIM?

The BIM Handbook (Eastman, Teicholz, Sacks & Liston 2011) defines it as follows: “With BIM (Building Information Modeling) technology, one or more accurate virtual models of a building are constructed digitally. They support design through its phases, allowing better analysis and control than manual processes. When completed, these computer-generated models contain precise geometry and data needed to support the construction, fabrication, and procurement activities through which the building is realized.”

BIM or Building Information Modeling is a process for creating and managing information on a construction project across the project lifecycle. One of the key outputs of this process is the Building Information Model, the digital description of every aspect of the built asset. This model draws on information assembled collaboratively and updated at key stages of a project. Creating a digital Building Information Model enables those who interact with the building to optimize their actions, resulting in a greater whole life value for the asset.

B is for Building.

The key point to remember here is that “building” doesn’t mean “a building.” BIM can be used for so much more than designing a structure with four walls and a roof. This preconceived notion of “building” comes from its roots—in an etymological sense, it quite literally means “house.”

In order to get the true gist of BIM, however, it helps to think of the word “building” in terms of the verb “to build.”

BIM is a process that involves the act of building something together, whether it relates to architecture, infrastructure, civil engineering, landscaping or other large-scale projects.

I is for Information.

And that information is embedded into every aspect of your project. This is what makes BIM “smart.”

Every project comes with a staggering amount of information, from prices to performance ratings and predicted lifetimes. It tells your project’s life story long before the ground is ever broken and it will help track potential issues throughout your project’s lifetime. BIM is a way to bring all of these details into one place so it’s easy to keep track of everything.

M is for Modeling.

In BIM, every project is built twice—once in a virtual environment to make sure that everything is just right and once in a real environment to bring the project to life.

This step is the overview of every other aspect of the building and its information. It provides the measure or standard for the building project—an analogy or smaller-scale representation of the final appearance and effect. It will continue to model this representation throughout the building’s lifespan.

This model can become a tool for the building owner’s reference long after construction is completed, helping to inform maintenance and other decisions. It’s also the step that will help to sell a concept while condensing all of those other layers of information that show the building’s every detail.

How can BIM help you?

BIM brings together all of the information about every component of a building, in one place. BIM makes it possible for anyone to access that information for any purpose, e.g. to integrate different aspects of the design more effectively. In this way, the risk of mistakes or discrepancies is reduced, and abortive costs minimized.

BIM data can be used to illustrate the entire building life-cycle, from cradle to cradle: from inception and design to demolition and materials reuse. Spaces, systems, products and sequences can be shown in relative scale to each other and, in turn, relative to the entire project. And through conflict detection, BIM prevents errors creeping in at the various stages of development and construction.

What is a BIM object?

A BIM object is a combination of many things (see the sketch after this list):

  • Information content that defines a product
  • Product properties, such as thermal performance
  • Geometry representing the product’s physical characteristics
  • Visualisation data giving the object a recognisable appearance
  • Functional data enabling the object to be positioned and to behave in the same manner as the product itself.
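As a rough illustration of how these components fit together, here is a minimal, hypothetical sketch of a BIM object as a data structure; the field names and example values are invented for illustration and do not come from any particular BIM standard or authoring tool.

```python
# Hypothetical sketch of a BIM object grouping the components listed above.
# Field names are illustrative, not from any particular BIM standard.
from dataclasses import dataclass, field

@dataclass
class BIMObject:
    name: str                                             # information content defining the product
    properties: dict = field(default_factory=dict)        # product properties, e.g. thermal performance
    geometry: str = ""                                     # reference to the product's physical geometry
    visualisation: str = ""                                # data giving a recognisable appearance
    functional_data: dict = field(default_factory=dict)   # placement and behaviour rules

window = BIMObject(
    name="Double-glazed window 1200x1500",
    properties={"U-value (W/m2K)": 1.4, "fire rating": "E30"},
    geometry="window_1200x1500.ifc",
    visualisation="window_1200x1500.png",
    functional_data={"hosted_by": "wall", "swing": "inward"},
)
print(window.name, window.properties["U-value (W/m2K)"])
```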

What is the future of BIM?

The future of the construction industry is digital, and BIM is the future of design and long-term facility management; it is government-led and driven by technology and clear processes; and it is implementing change across all industries. As hardware, software and cloud applications herald greater capability to handle increasing amounts of raw data and information, the use of BIM will become even more pronounced than it is in current projects.

BIM is both a best-practice process and 3D modeling software. By using it, designers can create a shared building project with integrated information in a format that models both the structure and the entire timeline of the project from inception to eventual demolition.

It enables architects and engineers alike to work on a single project from anywhere in the world. It condenses a plethora of information about every detail into a workable format. It facilitates testing and analysis during the design phase to find the best answer to a problem.

It makes for easier design, simpler coordination between team members and easier structure maintenance across the entire built environment—and this is just the beginning.

Resource Optimization

In today’s industrial age, where manufacturing processes are highly crucial and synonymous with development and growth, the need to use resources effectively and efficiently has become essential. The continuous growth of industries has led to the development of highly efficient, leaner processes which focus on minimum wastage and maximum utilization of the available resources through various technologies developed over time. The use of robots and process automation to eliminate human error and increase efficiency has been adopted by almost every industry today, and has been further facilitated by the Internet of Things (IoT) in developing smarter processes.

Utility optimization not only consists of handling resources in a smart manner, but also of optimizing the path or manner in which they are handled. Adjusting the placement of machines, as well as defining the flow of resources throughout the shop floor, is also an integral part of the utility optimization process. An efficient flow ensures efficient execution of the process and minimum wastage of time and resources. This is usually done through the use of process flow charts to determine process steps, as well as Pareto charts to determine the importance of every resource in terms of its usage and need in every process.
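As a small illustration of the Pareto ranking just mentioned, the following sketch sorts resources by usage and reports cumulative shares; the resource names and figures are hypothetical.

```python
# Sketch of a Pareto ranking of resources by usage (hypothetical figures).
usage = {"electricity": 420, "water": 260, "compressed air": 130,
         "steam": 90, "lubricants": 40, "packaging": 60}

total = sum(usage.values())
ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for resource, amount in ranked:
    cumulative += amount
    print(f"{resource:>15}: {amount:>4}  ({cumulative / total:6.1%} cumulative)")
# Resources near the top of the list (the 'vital few') are the first
# candidates for optimization effort.
```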

In order to execute resource optimization and make sure that it is continuously being carried out, energy audits and water audits can be conducted, which track the organization’s energy needs and water consumption respectively. The audits not only provide feedback about the status of optimization within the organization, but also help in tracking progress in this area and in setting targets accordingly. Even though these audits are somewhat time consuming, they are highly necessary, as they help the organization stay aligned with its set targets.

Optimization of resource usage not only decreases the amount of waste generated, but also leads to greater profits and creates opportunities for recycling and reusing wasted resources. In many cases, resource optimization leads to a reduction in carbon footprint, which is vital given the currently degrading state of the environment. Since India agreed to ratify the second commitment period (2013-2020) of the 1997 Kyoto Protocol for the reduction of greenhouse gases and thus reduce its carbon footprint, the need for cutting emissions and correspondingly minimizing waste through resource optimization has gained more importance. The rising trend of green technologies has facilitated optimization as well as cutting down on energy usage and reducing emissions.

The whole world is currently progressing at an unbelievable rate, and the environment is being affected by that very progress. Resource optimization, hence, has become necessary not only for generating greater profits and minimizing wastage of resources, but also for sustainability. “Recycle and Reuse” has become the motto of every major organization, and new ways to optimize resource usage are constantly being researched and put into use. Since the progression of technology is inevitable, there will always be a great need for effective resource optimization processes which contribute both to the organization’s profits and to sustainability.

HAZOP Guidelines

HAZOP uses a brainstorming approach around a series of guide words designed to qualitatively identify possible deviations from normal operation and their possible impacts. Responsibilities are assigned to investigate possible solutions for each problem found.

Guidance is given on study procedure and prerequisites for an effective HAZOP, including team selection, information requirements and record keeping.

To be effective, a HAZOP study must be systematic, detailed and conducted by a balanced team with experienced leadership.

Effective HAZOP strategy:

The effectiveness of a HAZOP will depend on:

  • the accuracy of information (including P&IDs) available to the team — information should be complete and up-to-date
  • the skills and insights of the team members
  • how well the team is able to use the systematic method as an aid to identifying deviations
  • maintaining a sense of proportion in assessing the seriousness of a hazard and the expenditure of resources in reducing its likelihood
  • the competence of the chairperson in ensuring the study team rigorously follows sound procedures.

Key elements of a HAZOP are:

  • HAZOP team
  • full description of process
  • relevant guide words
  • conditions conducive to brainstorming
  • recording of meeting
  • follow up plan

HAZOP Worksheets:
The HAZOP worksheets may differ depending on the scope of the study.
Generally, the following entries (columns) are included:

  • Ref. no.
  • Guide-word
  • Deviation
  • Possible causes
  • Consequences
  • Safeguards
  • Actions required (or, recommendations)
  • Actions allocated to (follow-up responsibility)

HAZOP Pre-requisites:

As a basis for the HAZOP study the following information should be available:

  • Process flow diagrams
  • Piping and instrumentation diagrams (P&IDs)
  • Layout diagrams
  • Material safety data sheets
  • Provisional operating instructions
  • Heat and material balances
  • Equipment data sheets
  • Start-up and emergency shut-down procedures

HAZOP Procedure:

Though there are no fixed approaches, the following is a typical HAZOP procedure (a minimal sketch of the resulting review loop is given after the list):

  1. Divide the system into sections (e.g., reactor, storage)
  2. Choose a study node (e.g., line, vessel, pump, operating instruction)
  3. Describe the design intent
  4. Select a process parameter
  5. Apply a guide-word
  6. Determine cause(s)
  7. Evaluate consequences/problems
  8. Recommend action: What? When? Who?
  9. Record information
  10. Repeat procedure (from step 2)
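
Steps 2-10 amount to a nested review loop over nodes, parameters and guide words, with one worksheet row per deviation examined. The sketch below illustrates that bookkeeping only; the nodes, parameters and guide words are hypothetical, and the substantive columns are left for the team to fill in during the brainstorming sessions.

```python
# Minimal sketch of the HAZOP iteration (steps 2-10), producing worksheet rows
# with the columns listed earlier. Nodes, parameters and guide words are
# illustrative; real studies rely on team brainstorming, not automation.
import itertools

nodes = ["feed line to reactor", "reactor vessel"]
parameters = ["flow", "temperature", "pressure", "level"]
guide_words = ["no", "more", "less", "reverse", "as well as", "other than"]

worksheet = []
ref = 1
for node, parameter, guide_word in itertools.product(nodes, parameters, guide_words):
    deviation = f"{guide_word} {parameter}"
    worksheet.append({
        "Ref. no.": ref,
        "Node": node,
        "Guide-word": guide_word,
        "Deviation": deviation,
        "Possible causes": "",        # filled in by the team
        "Consequences": "",
        "Safeguards": "",
        "Actions required": "",
        "Actions allocated to": "",
    })
    ref += 1

print(f"{len(worksheet)} deviations to review, e.g.: {worksheet[0]['Deviation']!r}")
```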


HAZOP Modes of Operation:

The following modes of plant operation should be considered for each node:

  • Normal operation
  • Reduced throughput operation
  • Routine start-up
  • Routine shutdown
  • Emergency shutdown
  • Commissioning
  • Special operating modes

A sample HAZOP process worksheet is illustrated in the figure below.

[Figure: sample HAZOP process worksheet]

HAZOP Outline:

[Figure: HAZOP study outline]

The key point here is that a HAZOP study must promote free thinking by the team members around each issue so that as many potential problems as possible are identified. At the same time, the chairperson must impose enough discipline to keep the study moving along without wasting time on issues that are of no consequence.