Tactical decision-making at the expense of long-term strategy
Although it may seem overly time-consuming at the start, you must keep in mind the long-term goals of your project, and your organization, throughout the design and implementation process. Failing to do so does not avoid problems; it merely delays their onset while increasing their likelihood and severity.
Failure to leverage the experience of others
There’s nothing like learning from those who have succeeded on similar projects.
It’s almost as good to gain from the experience of others who have failed at similar tasks; at least you can avoid the mistakes that led to their failures.
Successful business intelligence projects require the continuous involvement of business analysts and users, sponsoring executives, and IT. Ignoring this often-repeated piece of advice is probably the single biggest cause of many of the most spectacular failures. Establishing this infrastructure has to produce a clear business benefit and an identifiable return on investment (ROI). Executives are key throughout the process because business intelligence coordination often crosses departmental boundaries, and funding likely comes from high levels.
Your business intelligence project should provide answers to business problems that are linked to key business initiatives. Ruthlessly eliminate any developments that take projects in another direction. The motivation behind the technology implementation schedule should be the desire to answer critical business questions. Positive ROI from the project should be demonstrated during the incremental building process.
Common Misconceptions
Having too simplistic a view during any part of the building process (a view that overlooks details) can lead to many problems. Here are just a few of the typical (and usually incorrect) assumptions people make in the process of implementing a business intelligence solution:
• Sources of data are clean and consistent.
• Someone in the organization understands what is in the source databases, the quality of the data, and where to find items of business interest.
• Extractions from operational sources can be built and discarded as needed, with no records left behind.
• Summary data is going to be adequate, and detailed data can be left out.
• IT has all the skills available to manage and develop all the necessary extraction routines, tune the database(s), maintain the systems and the network, and perform backups and recoveries in a reasonable time frame.
• Development is possible without continuous feedback and periodic prototyping involving analysts and possibly sponsoring executives.
• The warehouse won’t change over time, so “versioning” won’t be an issue.
• Analysts will have all the skills needed to make full use of the infrastructure or the business intelligence tools.
• IT can control what tools the analysts select and use.
• The number of users is known and predictable.
• The kinds of queries are known and predictable.
• Computer hardware is infinitely scalable, regardless of choices made.
250
|
Chapter 10: Oracle Data Warehousing and Business Intelligence
• If a business area builds a data mart or deploys an appliance independently, IT won’t be asked to support it later.
• Consultants will be readily available in a pinch to solve last-minute problems.
• Metadata or master data is not important, and planning for it can be delayed.
Effective Strategy
Most software and implementation projects have difficulty meeting schedules.
Because of the complexity of business intelligence projects, they frequently take much longer than the initial schedule, and that is exactly what executives who need the information to make vital strategic decisions don’t want to hear! If you build in increments, implementing working prototypes along the way, the project can begin showing positive return on investment early, and changes in the subsequent schedule can be linked back to real business requirements, not just to technical issues (which executives don’t ordinarily understand).
You must avoid scope creep and manage expectations throughout the project. When you receive recommended changes or additions from the business side, you must confirm that these changes provide an adequate return on investment, or you will find yourself working long and hard on facets of the infrastructure without any real payoff. The business reasoning must be part of the prioritization process; you must understand why trade-offs are made. If you run into departmental “turf wars” over the ownership of data, you’ll need to involve key executives for mediation and guidance.
The pressure of limited time and skills and immediate business needs sometimes leads to making tactical decisions in establishing a data warehouse at the expense of a long-term strategy. In spite of the pressures, you should create a long-term strategy at the beginning of the project and stick to it, or at least be aware of the consequences of modifying it. There should be just enough detail to prevent wasted efforts along the way, and the strategy should be flexible enough to take into account business acquisitions, mergers, and so on.
Your long-term strategy must embrace emerging trends, such as the need to meet compliance initiatives or the need for highly available solutions. The rate of change and the volume of products being introduced sometimes make it difficult to sort through what is real and what is hype, and most companies struggle to keep up with the knowledge curve. Traditional sources of information include vendors, consultants, and IT industry analysts, each of which usually has a vested interest in selling something: the vendors want to sell products, the consultants want to sell skills they have “on the bench,” and the IT industry analysts may be reselling their favorable reviews of vendors and consultants to those same vendors and consultants. Any single source can lead to wrong conclusions, but by talking to multiple sources you should see a consensus emerge that answers your questions.
The best place to gain insight is by discussing business intelligence projects with similar companies whose efforts have reached at least the working-prototype stage, often at conferences. Finding workable solutions and establishing a set of contacts to network with in the future can make attendance at these conferences well worth the price, and often more valuable than the topics presented in the standard sessions.
Chapter 11: Oracle and High Availability
The data stored in your databases is one of your organization’s most valuable assets.
Protecting and providing timely access to this data when it is needed for business decisions is crucial for any Oracle site.
As a DBA, system administrator, or system architect, you’ll probably use a variety of techniques to ensure that your data is adequately protected from catastrophe. Of course, implementing proper backup operations is the foundation of any availability strategy, but there are other ways to avoid a variety of possible outages that could range from simple disk failures to a complete failure of your primary site.
Computer hardware is, by and large, extremely reliable, and that can tempt you to postpone thinking about disaster recovery and high availability. Most software is also very reliable, and the Oracle database protects the integrity of the data it holds even in the event of software failure. However, hardware and software will fail occasionally. The more components involved, the greater the likelihood of downtime at the worst time.
The difference between inconvenience and disaster is often the presence or absence of adequate recovery plans. This chapter should help you understand all of the options available when deploying Oracle so you can choose the best approach for your site.
With Oracle, you can guarantee that your precious data is highly available by leveraging built-in capabilities such as instance recovery or options such as Real Application Clusters. However, equally important in deploying a high-availability solution is the implementation of the appropriate procedures to safeguard your data. This chapter covers these various aspects of high availability.
What Is High Availability?
Before we can begin a discussion of how to ensure a high level of availability for your data, you need to understand the exact meaning of the term “availability.” Availability can mean different things for different organizations. For this discussion, we’ll consider a system to be available when it is “up” (meaning that the database can be accessed by users) and “working” (meaning that the database is delivering the expected functionality to business users at the expected performance).
Most businesses depend on data availability. More recently, accessibility to data via web-based solutions means that database failures can have an even more dramatic impact on business. Failures of such systems accessed by a wider community outside of company boundaries are, unfortunately, immediately and widely visible and can seriously impact a company’s financial health and image. Consider the web-based customer service provided by package shipping companies that enable customers to perform package tracking. As these customers come to depend on such service, interruptions in that service can cause these same customers to move to competitors.
Taking this a step further, consider the added complexity of accessing data that resides in multiple systems. Integrating multiple systems increases the chance that a single failure could make access to an entire supply chain unavailable.
To implement databases that are highly available, you must design an infrastructure that can mitigate downtime, such as by deploying redundant hardware. You must also embrace techniques that allow recovery from disasters, such as by implementing appropriate backup routines.
Measuring and Planning Availability
Most organizations initially assume that they need data access 24/7, meaning that the system must be available 24 hours a day, 7 days a week. Quite often, this requirement is stated with little examination of the business functions the system must support. As the cost of technology components declines and reliability increases, many feel that achieving very high levels of availability should be simple and cheap.
Unfortunately, while many components are becoming cheaper and more reliable, component availability doesn’t equate to system availability. The complex layering of hardware and software in today’s two- and three-tier systems introduces multiple interdependencies and points of failure. Achieving very high levels of availability for a system with varied and interdependent components is not usually simple or inexpensive.
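To make this concrete, consider a rough back-of-the-envelope calculation (the three-tier path and the 99 percent figures below are assumptions chosen for illustration, not measurements of any particular system). If a request must pass through a web server, an application server, and a database, and each layer is independently available 99 percent of the time, the availability of the whole path is the product of the individual figures:

    # Hypothetical three-tier request path; each component's availability is assumed.
    web_server = 0.99
    app_server = 0.99
    database = 0.99

    # For independent components in series, system availability is the product
    # of the component availabilities.
    system_availability = web_server * app_server * database
    print(round(system_availability, 4))  # about 0.9703, i.e., roughly 97 percent

In other words, three layers that each look quite reliable on their own combine into a system that could be unavailable for more than ten days a year.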
To provide some perspective, consider Table 11-1, which translates the percentage of system availability into days, hours, and minutes of annual downtime based on a 365-day year.
Table 11-1. System availability

% availability    System downtime per year
                  Days    Hours    Minutes
95.000            18      6        0
96.000            14      14       24
97.000            10      23       48
98.000            7       7        12
99.000            3       16       36
99.500            1       20       48
99.900            0       9        46
99.990            0       1        53
99.999            0       0        5
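If you want to reproduce or extend these figures, the underlying arithmetic is straightforward: annual downtime is simply the unavailable fraction of a 365-day year. The following is a minimal sketch in Python (not from the original text) that converts an availability percentage into days, hours, and minutes of downtime per year; because it rounds minutes to the nearest whole number, a row or two may differ slightly from the published table.

    def annual_downtime(availability_pct):
        """Convert an availability percentage into (days, hours, minutes)
        of downtime per 365-day year."""
        total_minutes = 365 * 24 * 60
        downtime_minutes = round((1 - availability_pct / 100.0) * total_minutes)
        days, remainder = divmod(downtime_minutes, 24 * 60)
        hours, minutes = divmod(remainder, 60)
        return days, hours, minutes

    print(annual_downtime(95.0))     # (18, 6, 0)
    print(annual_downtime(99.999))   # (0, 0, 5), the classic "five nines"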
Large-scale systems that achieve over 99 percent availability can cost millions of dollars to design and implement and can have correspondingly high ongoing operational costs. Marginal increases in availability can require large incremental investments in system components. Moving from 95 to 99 percent availability is likely to be costly, while moving from 99 to 99.99 percent will probably be costlier still.
Another key aspect of measuring availability is the definition of when the system must be available. A required availability of 99 percent of the time during normal working hours (e.g., from 8 a.m. to 5 p.m.) is very different from 99 percent availability based on a 24-hour day. In the same way that you must carefully define your required levels of availability, you must also consider the hours during which availability is measured. For example, many companies take orders only during “normal” business hours. The cost of an unavailable order-entry system is very high during the business day but drops significantly after hours. Scheduled downtime after hours can therefore make sense if it helps reduce unplanned failures during business hours. Of course, for multinational companies and businesses on the Internet, a global reach implies that the business day never ends.
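As a rough illustration of how much the measurement window matters (the 9-hour window, applied to every day of the year, is an assumption made up for this example), compare the downtime that 99 percent availability allows over business hours with what it allows around the clock:

    # Hypothetical comparison: 99 percent availability measured over a
    # 9-hour business day (8 a.m. to 5 p.m.) versus over a full 24-hour day.
    availability = 0.99

    business_window_hours = 9 * 365    # 3,285 measured hours per year
    around_the_clock_hours = 24 * 365  # 8,760 measured hours per year

    print((1 - availability) * business_window_hours)    # about 32.9 hours of allowed downtime
    print((1 - availability) * around_the_clock_hours)   # about 87.6 hours of allowed downtime

Either way the system is “99 percent available,” but the two targets imply very different engineering and operational costs.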
That initial requirement that a system be available 24/7 must be put in the context of the cost of deploying and maintaining such a system. An examination of the complexity and cost of very high availability will sometimes lead to compromises that reduce both the goals and the budget for system availability.
The costs of achieving high availability are certainly justified in some cases. It might cost a brokerage house millions of dollars for each hour that key systems are down.
A less-demanding business, such as catalog sales, might lose only thousands of dollars an hour if a less-efficient manual system can serve as a stopgap measure.
But, regardless of the cost of lost business opportunity, an unexpected loss of availability can cut into the productivity of employees and IT staff alike.