Hello World

debrucer

My 30+ year career with Oracle is over. Officially retired now, AWS certified, and looking for part-time, remote work in database administration or DevOps. Think of me as Any Cloud, Any Database, not just Oracle.

The former MyOracleDatabase site was getting cluttered with categories, so I am moving off and will be posting to separate sites. Find me at AWSUGSanDiego.com, debrucer.com, IoT-sd.org and ToyotaSupraTurbo.com.

This site may be re-purposed with a tool named OraLogs… and I will run an Oracle instance somewhere to create demo logs. For now, the files are static on a site I’ve operated for many years. I will be updating this tool from its 2007 version to Oracle12c in the near future.

David

Tools and Database Tricks

David Russell

I must tell you up front, I am not a fan of anything 3rd party if the 1st party can do it. Tools must be truly value-added propositions, or they don’t fly with me. During our users group presentation last night, I found myself desperately wanting to be the DBA at “that company,” the one described as having all sorts of problems that Hexatier would solve. The situation described, the one that made this tool the solution, would not have happened with my database. True, at least to a degree. Some of the speaker’s recent experience described a DBA and a couple of developers as people who needed to be slapped. In his example, the DBA was at fault for not restricting access. We are talking about a HIPAA-regulated environment here, so the lack of procedures, rules, and policies, basically the lack of control, that allowed that to happen would itself make the shop NOT compliant. In this respect, the tool is fixing a problem that does not exist.

I am also an advocate for small business, and while I do my best to follow the law, I do not fully accept the copyright, patent and trademark systems (laws) as entirely proper. They are the law, though. I inherently object to a patented implementation of a common idea as something I’m willing to buy into… unless it meets the first rule and adds value. Otherwise, I would have to look at what it does and how it does it, and do it myself; hopefully avoiding any copyright infringement.

I mean to say, that’s the system. These common ideas are captured in a legal format that means “don’t use them”; but you can’t win in business if you don’t, so you use them. Nothing happens if you’re mildly successful, or at least privately successful; but make it in a big way and you’re sued into oblivion, whether you knew of the patent or not. The laws stifle individual creativity.

That said, Hexatier’s visual encryption of data is interesting. Whether it is sufficient on its own, or still requires other layers of encryption beneath it, along with the related key-management requirements, remains to be seen. This could be value-added.

The time required to move a system forward into an encrypted environment was estimated at 12 to 18 months. The time to back out, if the solution were found unacceptable, was another six months.

With Hexatier’s “dynamic” encryption and the ability to back out instantly, that is definitely a leg up… if it is truly considered a HIPAA-compliant solution. Again: is it value added?

Then there is their scan tool. It finds propagated sensitive data and reports it. This sounds pretty slick, down to and including the ability not only to recognize the format of a number (e.g., 012-34-5678 as an SSN), but also whether it is a valid SSN or a bogus one.

Clearly, I do not understand the math that pulls that one off without a lookup somewhere. So, the Hexatier tool does some neat things, and I might want to see some of them in operation. The product is available in the AWS Marketplace, and there is a check box for a trial offer.

The Hexatier product was worth looking at and understanding. I am concerned about DB rules, tool rules, and CloudFormation rules: how and where to use them, and for that matter, who should use them. A lot of this is defined in the big HIPAA picture; the separation of duties, for instance, is not up to the DBA to establish. It is part of a statement in a document and of practiced procedures which have been audited and will be audited again regularly.

Hopefully, your DBA is nowhere near as bad as in the example given; but I question whether the DBA that got you here should be the one implementing this tool, or your database, for that matter.

Hire me for four to six months to straighten things out. Then we can talk about tools which may or may not be needed.

David

 

An Inherited Oracle Instance

David Russell

What do you do when you become responsible for an Oracle database that has been running for three years, accumulating nobody really knows what, other than that it is all very important?

You have the Windows 2008 R2 server administrator password and the Oracle “system” password. You are the administrator of something else, and now this is yours. You have little additional information. You might only have “system” because you found it embedded in a couple of scripts; that happens more often with “system” than with the “sys” account credentials.

There’s a ton of stuff to learn with regard to schemas, exports and backups; but you must first get connected properly. Here is what happens with your system user password:

 

The syntax “connect / as sysdba” is familiar; the slash represents the separator between the user ID (schema name) and the password, both of which are implied by the operating-system connection.

The external password file shows that the only user with the Oracle “sysdba” privilege is the Oracle user “sys”. You have “system”. System is not in the password file, and typically users are granted privileges with or without the ability to pass them on. They don’t have the ability to pass on privileges they do not own… one would think.

One might be wrong in this case. The connection above failed without sysdba permissions. Look at the following login, no slash…
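A sketch of both attempts (shown Linux-style with heredocs for readability; on the Windows host the same connect strings were typed into SQL*Plus interactively):

```shell
# Attempt 1: the familiar OS-authenticated form. On this inherited
# host it failed with ORA-01031: insufficient privileges.
sqlplus /nolog <<'EOF'
connect / as sysdba
EOF

# Attempt 2: no slash, user named explicitly; the blank line below
# answers the password prompt with a bare press of return. This one
# connected, and "show user" reported SYS.
sqlplus /nolog <<'EOF'
connect sys as sysdba

show user
EOF
```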

 

No slash, and no password either, just a press of the return key. This is the particular case of being logged in with only system, needing sys, in order to grant sysdba to a new DBA user, in order to use RMAN for a backup, or any other maintenance for that matter, with no other documentation.

The emerg_dba user was created with connect, resource and dba; but it could not have sysdba without it being granted by “the” only existing sysdba. When you are actually connected as sysdba, the connection shows that you are “sys”; it doesn’t matter that you used no password when prompted, it knows who you are.

Now, connected this way, give sysdba to your emerg_dba user, as follows:
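A sketch of the grant, assuming the emergency account is named emerg_dba as above (the blank line after connect answers the password prompt):

```shell
sqlplus /nolog <<'EOF'
connect sys as sysdba

-- emerg_dba already has connect, resource and dba; only an existing
-- sysdba (sys) can grant SYSDBA, which records the user in the
-- external password file.
grant sysdba to emerg_dba;

-- Verify: emerg_dba should now appear here alongside sys.
select * from v$pwfile_users;
EOF
```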

 

Go off. Get your backup and test it somewhere. This account has just become your Oracle DBA account. You really should not use the sys or system accounts for 99.9% of the things you might be doing anyway. In fact, you might want to create another account for yourself for day-to-day operations. You don’t need “dba” to do non-dba work. Oh, and change your password from Password 🙂

Keep your eyes open for clear text passwords for “sys” or any other user, for that matter. Make sure accounts for features are locked down with non-default passwords.
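A sketch of the kind of check involved (dba_users is the standard view; dba_users_with_defpwd lists accounts still using well-known default passwords):

```shell
sqlplus /nolog <<'EOF'
connect sys as sysdba

-- Which accounts are open vs. locked and expired.
select username, account_status
from   dba_users
order  by account_status, username;

-- Accounts still set to their well-known default passwords.
select username from dba_users_with_defpwd;
EOF
```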

ORA-01031? Don’t necessarily believe it!

It took a lot of effort to get to this simple conclusion. Don’t always accept it when you are told something can’t be done. Sometimes the mistake is as simple as accepting an answer because it sounded logical. That was the mistake here.

It was as simple as this: “connect / as sysdba” was wrong, while “sys as sysdba” allowed full access to everything you needed to do.

Several years ago, I wrote a page about the distinctions of who owns various components of Oracle. That post may be found here. With this new understanding that the Oracle “system” account can modify the “sys” account (and therefore, password, access, code, etc.), those boundaries have been crossed.

Don’t use sys and system, and refrain from using dba permissions. Make sure you can use them, then don’t.

Building an Oracle Image on AWS EC2

David Russell

The effort behind the AWS button push does not happen accidentally. RDS makes it appear simple. The only reason for me to build on EC2 is the ability to shut down certain configurations, depending on the project.

This post documents what it takes to build a fresh image of an operational, usable instance of Oracle 12c plus Enterprise Manager on a real Oracle operating system (not available on RDS) on an AWS EC2 instance.

Ideally, the steps taken in this document would be automated and as readily available as on RDS. Nobody said anything was ideal. So, yes, all of this should be automated, and this should become a button push. For future Oracle12c installations, the Amazon Machine Image (AMI) we create will skip this work.

Oracle database administrators are responsible for installing the product on the host provided by the employer or customer. When local hardware was the only choice, it boiled down to spec, order and wait, or install on an already crowded machine. The job could take days, and in some cases, weeks.

AWS changes all of that with Oracle on RDS. Put licensing issues aside and a full complement of Oracle may be installed on Linux in less than an hour. The installation includes a version of Enterprise Manager (EM) Database Express. It is full featured on RDS if you license the management packs. Installing the full version of EM is a separate post to come. For this installation, we will install Express.

Relational Database Service (RDS) still uses a pay-for-what-you-use model; however, it cannot be turned off. In order to stop the billing, you must delete the instance.

With the same database built on an EC2 host, the billing stops when you turn the instance off.

You will then only be paying for storage and any elastic IPs you want to maintain. I spent several weeks two months ago trying to find something that I could not do on RDS and was unsuccessful.

No longer having to do backups and patches is worth a bit of extra expense for RDS.

RDS is great, there’s no question about that. I will use and recommend it where appropriate; however, today, I need an instance with an Oracle operating system.

I want the best for this image. That is Oracle12c on Unbreakable Linux.

Here’s how to do it…

Open the AWS EC2 Dashboard and Launch an Instance

Launch Instance

 

Searching for Oracle Linux brought back a list; this is the one I selected.

This AMI (Amazon Machine Image) has a separate license fee of 6 cents per hour used. When it is down, it does not cost you anything.

There are other places to obtain your OS; but, this one comes with patches and some amount of support from the vendor. It also has flaws which we will get into later.

Step 2. My favorite instance type is m3.medium since it is approximately what I would have purchased at home. Obviously, pick what you need. Remember that you can change it later in a matter of minutes.

Probably the biggest commitment will be the storage and the subsequent cost of storing snapshots and backups. Next we will provide configuration details. The Network and Subnet fields need to be set… and obviously, you will need to have these items built already, or build them. They exist in my account, so let’s go.

It is also important to establish any IAM roles ahead of time; if, as in this example, you want to include the role for ec2-s3 access, it must be done now. Roles cannot be added to an existing instance later.

Do Not Forget to Add Roles...

Step 4: Add Storage. A trick taught by Kevin Epstein of the LA Users Group: adjusting the size from the default up to 100 GB increases the IOPS numbers…

in this example, from 45/3000 to 300/3000

However, when you go to use the additional storage, it is not partitioned or formatted.

Under traditional circumstances today this is not a big deal. One does not get very far into being a Linux administrator without learning how to detach drives and re-attach them to a second instance to be fixed… whatever the fix is.

In this case, the fix is partitioning, and then extending the file system. The AWS site gives step-by-step instructions on how to proceed. The documentation says the attachment is to be done while the second instance is running; the error message, however, says it cannot be attached because the instance is running.
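A rough sketch of that fix on the helper instance (device names are illustrative; confirm with lsblk before touching anything):

```shell
# Identify the attached volume; it is assumed here to appear as /dev/xvdf.
lsblk

# Grow partition 1 to fill the enlarged volume, then grow the
# filesystem to match. growpart comes from the cloud-utils package.
growpart /dev/xvdf 1
resize2fs /dev/xvdf1
```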

For the record, while it can be attached to a stopped instance, in this particular case, it was not bootable. AWS documentation warns you against booting while another bootable disk is attached… and for good reason.

I have spent way too much time on this aspect, as I typically refuse to accept that something cannot be done; but in this case, it was a long waste of my time. The topic will come up again shortly.

An alternate solution is best for me. My customers are of the type who will not allow their data to be unencrypted, and the primary disk on an EC2 instance cannot be encrypted. I opted for the following solution, which includes 100 GB of encrypted storage with the increased IOPS burst.

The image above is the configuration. The first device is properly formatted with a matching file system. The second device is encrypted. The keys required for encryption are automatically provided by AWS. There is no wasted space that I cannot get to yet have paid for needlessly.

For my next AMI, I will use three devices. This way, the first device can be the default 15 GB. The second can be big enough for Oracle. The third device can be added later, after I determine the required DB size. This way, the stored AMI can be smaller.

The image below is the before and after… one disk, vs. two.

On the top notice that xvda is 100G, and the partitioned device is only 15G. Also notice on the second host that the full device is partitioned.

one disk vs. two

I was over it. Then came time to mount the disk permanently. That requires an entry in the Linux OS file named fstab. This is where you can mess up so badly that the system will not come back. This is probably the number one place to learn about detaching and reattaching disks; since that cannot be done with a Marketplace image… restore!
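For what it is worth, the safety net here is the nofail mount option; a sketch of an fstab entry for the second volume (device and mount point are illustrative):

```shell
# "nofail" lets the instance boot even when the volume is missing,
# which is exactly the lockout scenario described above.
echo '/dev/xvdb1  /u01  ext4  defaults,nofail  0  2' >> /etc/fstab

# Test the entry immediately rather than discovering a typo at boot.
mount -a
```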

A sane prerequisite to this entire post is to back up your system. I will admit to restoring mine three times… with the second device properly attached the third time.

Once we are up and running with an Oracle Linux OS and some storage, here are the other simple tasks to complete:

  • create users and groups required for Oracle operation
  • create minimal structure for Oracle with permissions, as required
  • entry in /etc/hosts for this server
  • set time zone
  • obtain software for the Oracle 12 enterprise database & enterprise manager
  • upload software
  • unzip software into a staging area
  • install the latest sqldeveloper (on your pc)
  • configure putty and Xming (X11) server (also on pc)
  • install Oracle… database first
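The first few list items can be sketched like this (group, user, and path names follow the common Oracle conventions and are not necessarily the exact values used on this build):

```shell
# Users and groups required for Oracle operation
groupadd oinstall
groupadd dba
useradd -g oinstall -G dba oracle

# Minimal directory structure for Oracle, with permissions
mkdir -p /u01/app/oracle
chown -R oracle:oinstall /u01/app
chmod -R 775 /u01/app

# Entry in /etc/hosts for this server (address and names are placeholders)
echo '10.0.0.10  ora12c.example.com  ora12c' >> /etc/hosts

# Set the time zone
ln -sf /usr/share/zoneinfo/America/Los_Angeles /etc/localtime
```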

It takes X11 to use the Oracle installer. X11 is very slow, and you should resist the temptation to click ahead while in the installer. The password-setting page and the global name fields are particularly awful to deal with in X.

Silent mode is used after you have perfected things; RDS uses silent mode. Because this is an Oracle OS distribution, I was expecting things to be a lot more “ready” for Oracle. It took considerably more work than expected. It is all recorded now.

Snapshots along the way, and restores as problems were addressed, were necessary to cleanly test each change and get this gold image. This way one does not introduce tools and unnecessary components onto the image that is produced at the end. No unnecessary packages. No Adobe or third-party stuff here. Oracle Linux and Oracle only.

This is Oracle12c on Unbreakable Linux. There will be around 40 screens to follow. Each has a title and flyover (often the same words)… minimal comments, if I can….

Here is how to install Oracle12c on the AWS instance built above:

Notice in this example that the SID, UNQNAME and HOSTNAME are not the same values that I chose when I built the instance. dbhome_1 should not exist on a fresh host; if it does, the next available number will be used by the installation.

Having these values right now will save having to manually enter them later. The defaults will be based on what you have set here, and you may modify them at that time… from these defaults.
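A sketch of the kind of bash profile entries meant here (the values are examples, not the ones chosen for this build):

```shell
# Appended to ~oracle/.bash_profile
export ORACLE_HOSTNAME=ora12c.example.com
export ORACLE_SID=orcl
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
```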

Set your bash profile

Execute the following statement as root to meet prerequisites:

yum install oracle-rdbms-server-12cR1-preinstall

Output of that command:

Dependencies Resolved

The following depends on where you unzipped the OTN files…
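As a sketch, assuming the OTN zips landed in a staging area such as /u01/stage:

```shell
# Run the graphical installer as oracle over the X11 forward
# (putty + Xming), backgrounded with & so the shell stays usable.
export DISPLAY=localhost:10.0   # your forwarded display may differ
cd /u01/stage/database
./runInstaller &
```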

Start the installer with an ampersand to run in background

 

Do NOT wish to receive.... fails if checked

 

Answer Yes to remain uninformed

 

Create and Configure a Database

Change the following from desktop to server class…

Select Server Class

We are not installing the grid software. Shared storage does not happen on AWS.

select single database installation

Advanced install is required here…

Select Advanced Install

 

Select your language - English here...

This represents a change in the OTN distribution. This is an Enterprise-only edition bundled with a group of advanced (extra cost) features. Oracle Standard Edition and Standard Edition One are licensing restrictions… and had previously shared the same distribution package.

Enterprise Edition is the only choice...

 

Default locations if all your environment variables are set right

All of this can be changed later…

General Purpose

I should have used an AWS defined domain instead of .world. It will have to be corrected before EM Express can be used on the network.

The global database name field is particularly hard to set using X11.

Global Name, SID and deselect container DB choice

More memory would be nice. 1506 MB is not 40% of what is available, and it cannot be adjusted beyond 1882 MB. This is acceptable for a desktop, not for a server.

Specify Configuration Options - Memory

The default here is a poor choice. Use AL32UTF8. While the default is workable, it is not compatible with a lot of US-built databases. Internationalization may change all that; but the default has never been a good choice here.
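After the build, the choice can be confirmed with a query like this:

```shell
sqlplus /nolog <<'EOF'
connect sys as sysdba

select value
from   nls_database_parameters
where  parameter = 'NLS_CHARACTERSET';
EOF
# Reports AL32UTF8 if set as recommended above
```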

Specify Configuration Options - Character Set

If this were a production server these would never be installed. I have a particular purpose and these schemas are my main reason for building this image.

Specify Configuration Options - Sample Schemas

Express comes by default… use it here.

Express or full Enterprise Manager?

We do not want ASM any more than we wanted the grid software above…

Enable Recovery - Default location if environment variables set

We did not initially create an operator group… leave it blank or assign it to dba. Do not assign it to oracle, even if it is in the drop-down choices.

Privileged Operating System Groups

 

Dependencies Resolved

 

Summary of Components to be Installed Next

 

Product is being installed

 

Product is being installed - looks close...

 

Separate windows installing clone database (examples)

 

The string for the URL here should have been used above instead of .world.

Configuration Info - EM Express's URL, too

Nothing to change here, just showing you the users which are installed…

only two of forty accounts unlocked

 

Install at 100%

 

Success!

Now, let’s make that final golden image of Oracle12c on Unbreakable Linux.

Here is the freshly created instance... down.

 

Action. Image. Create Image...

I was wrong when I wrote the Image description in this shot… it cannot be changed.

 

Specify New Image Details Here

 

Your image is being built now

 

AMI page while building... watch here and on snapshots page

 

 

Enter your own info for name field

 

Launch the new image...

It is not safe to delete the old instance until the new instance created by the new AMI is tested. You definitely want to check out the newly launched host (instance) and add labels on the storage page and anywhere else they are not consistent. Those steps are complete on my VPC. You now have a golden image of Oracle12c on Unbreakable Linux.

Don’t forget to update /etc/hosts as root on the instance built by your gold image of Oracle12c on Unbreakable Linux. Snapshots stay with the new AMI. Delete any other snapshots, volumes or instances that are now no longer needed.

Oracle PULA is Real but Why?

David Russell

It is not always clear which rumors about Oracle licensing are true. With regard to the new Perpetual User License Agreement (PULA), some of what you hear is probably true. The best advice I keep in mind is that anything which was ever once negotiable remains negotiable. Oracle prices are not set in stone (or gold), as Oracle would like them to be.

The existing ULA spans one through five years. At the end of the ULA the customer certifies their usage to Oracle and pays for any extra used. This means that if your license was for some number of Standard Editions, and you used twice that number of Enterprise Editions, plus extra-cost options and the various packs available only in Enterprise Edition, you pay for it all at the end.

Oracle database licenses are purchased with support, at 22% per year, every year. Licenses have been cancelled when support was not paid, so it is effectively mandatory. With My Oracle Support (MOS) comes connectivity back to Oracle for diagnostic reasons. While connectivity is not mandatory, it is obviously beneficial, unless you’re stealing Oracle. They have the right to come in and inventory your usage, too.

Let the customer use options and features. If they use it, they will pay. There are no keys to Oracle code. Go ahead. Install it all! Use extra cost advanced features. Forget as a developer that these features multiply license costs. Embed that code.

The new PULA removes the time requirement; instead, it is priced as a yearly fee according to an estimated usage. It is speculation to think that estimated usage removes the bump at the end. There is no indication that prices are going to come down. No price reduction has been announced. It is possibly a move to clarify; but it still has flaws.

Oracle has twelve commercially available price lists today. Revenue from database licenses is down while application license revenue is up. Oracle has no incentive in making the database cheaper, and to my knowledge, the new pricing does not do so.

Oracle is expensive. Their licenses are confusing on top of complex. Legal departments don’t always agree on interpretations. The new PULA attempts to address some issues and it raises others. It will remain complicated.

I believe this is an attempt to tighten up the prices and guarantee that Oracle databases are used across the board… making it harder to leave Oracle for other products. Under the skin, it is harder for Oracle to track history with virtual databases in any one of many clouds. They exist. They multiply. They disappear.

Agreeing to a flat rate might resolve things in Oracle’s eyes; but paying for more than you need shouldn’t be the agenda. And don’t think for a minute that it will eliminate the fact that if you cheat, you will pay.

Amazon has a different price philosophy. See what I wrote on that in 2013 here.

I have written several posts on Oracle, Options, Packs, licenses in general, and I will be writing more. The price differences between Oracle SE on AWS and Oracle SE from Oracle are not representative of a true value.

The real test case will be AWS RDS license included for Oracle Enterprise Edition. In my eyes, the only Oracle database to own.

Oracle to PostgreSQL

David Russell

After spending four months tuning an 11g instance and proprietary application SQL using Oracle tools and the Oracle Wait Interface (OWI), I am a big fan of Oracle Enterprise Edition.

Oracle Standard Edition does not include a license for the internal data collected by the kernel and used by the OWI. Licensing for this AWR and ASH data comes with Enterprise Edition only.

With Enterprise Edition and the Diagnostics and Tuning packs, one can learn that a database does not slow down. It runs at 100% all of the time, or it waits. Clear identification of the waits allows a fix with excellent accuracy. No third-party tools are required; in fact, they do not work well.

In order to tune Standard Edition, we use 30-year-old techniques which are not effective.

Oracle Enterprise Edition is the best there is… Standard Edition is a waste of time and money.

If you use Oracle Standard Edition, depending on your use case, you may be a candidate for migration off Oracle to PostgreSQL.

Oracle Apps need Oracle databases, so that is a roadblock.

Existing licenses with Oracle may take time to reduce. Fortunately, an Oracle DBA is always looking for spare licenses.

With those roadblocks understood, let’s talk about the path from Oracle to PostgreSQL.

The proposed migration will be done using Amazon Web Services (AWS) as the target.

While PostgreSQL is Open Source software, distributed for free, the version offered by EnterpriseDB has extensive customization for Oracle… and comes with a cost.

Check out EnterpriseDB (http://www.enterprisedb.com/)

The current per-socket cost is $6,900/year, with 1- or 3-year terms, support included, 24/7 with a one-hour initial response time. No discounts are available.

This is versus Oracle’s per-(core × multiplier) cost of $17,500/year with a 5-year term, plus 22% annual maintenance/support. Companies receive up to 35% discounts on Oracle from VARs.

That’s $11,375 per core-multiplier unit by that formula, versus $6,900 per socket regardless of core count.
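The arithmetic behind those two numbers, for the record:

```shell
# Oracle: $17,500 list with the 35% VAR discount applied
oracle=$(( 17500 * 65 / 100 ))
# EnterpriseDB: flat per-socket price, support included
edb=6900

echo "Oracle per core-unit: \$$oracle"    # prints 11375
echo "EnterpriseDB per socket: \$$edb"
```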

The EnterpriseDB solution is generally accepted by corporations because of the support and reputation. Several alternatives are possible, including using free Open Source software and manually converting Oracle PL/SQL code to PL/pgSQL.

This part will be tricky, since Oracle has packages, procedures and functions, while PostgreSQL only has functions. There is a community that has been working on this for years. One Open Source conversion program is over 10,000 lines of Perl script.

My recommendation is for a six week trial and evaluation of moving YOUR Oracle database to PostgreSQL with a decision as to how to proceed (AWS EC2, or Enterprise DB/AWS/RDS) at that time.

Not interested in a migration to PostgreSQL? We can also move your existing Oracle instance to the cloud, with or without a version upgrade. We’re always interested in tuning Oracle Enterprise Edition databases.

Check out my post on Tuning here.

David

Old, Slow and Meticulous…

David Russell

that’s me when it comes to tuning your Oracle database.

Tuning an Oracle database is nothing more than configuring the system and program global areas and locking them into memory. This is relatively simple on Solaris and most Linux distributions.

Oracle has been building in tuning features for thirty years. They succeeded with version 11g and improved on it in the Oracle12c Enterprise Edition (only). Not all features are turned on by default, or by design. Several things must be checked, with possible configuration changes, in order to allow them to work. Kernel settings and swap type, size and location are also validated. It still takes an expert to turn some of them on… and it may require restarts, or a prototype, to test production systems.

An Oracle database, properly installed and configured, works as fast as it is allowed under all circumstances. For those occasions when it does not, we use the Oracle Wait Interface (OWI) to determine why not. Your database does not “slow down”; it runs full speed, or it “waits”. When it waits, there is always an identifiable fix. If it waits too long and too often, it sure seems to slow down.
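A minimal sketch of using the OWI to see where the time goes (v$system_event is a standard dynamic view; session-level ASH detail additionally requires Enterprise Edition licensing, as discussed above):

```shell
sqlplus /nolog <<'EOF'
connect sys as sysdba

-- Top ten non-idle wait events since instance startup.
select *
from ( select event, total_waits, time_waited
       from   v$system_event
       where  wait_class <> 'Idle'
       order  by time_waited desc )
where rownum <= 10;
EOF
```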

One client was using Oracle Label Security (OLS), and we found that row-level security was averaging between 25 and 33% of all available CPU. It was taking 93 minutes of CPU time in one three-hour block. We reduced this to 14 minutes by using the OWI to create and save optimal execution plans.

In addition to tuning your Oracle database, you must tune the SQL code, too. Oracle can tune itself and all application SQL, including that in Java and 3rd-party applications. It can do this automatically, and it can provide all the documentation required on the execution plans. In 12c, it can even change a plan during execution. It takes setup and monitoring; but once you see what it does, you won’t stop it.

Let me tune your Oracle Enterprise database and configure it to tune itself.

Whether you need to migrate earlier versions of Oracle to Oracle12c, or simply move your Oracle database to the cloud, I have experience doing both. My recent experience is with Amazon Web Services (AWS), using both EC2 and RDS instances.

Migrations of any Oracle edition can be accomplished as well as several high availability options. Tuning Oracle instances and 3rd party applications requires Enterprise Edition. Any instances that I create will be done using Oracle Technical Network (OTN) Developer’s Licenses. You will be responsible for any required production licenses.

If you are thinking about leaving Oracle, please check out my post on migrating to PostgreSQL

David

What About Data Guard Experience?

David Russell

Oracle Data Guard and Active Data Guard are basically GUIs used to accomplish some form of data replication. I have been doing data replication with Oracle since 1990, when I built a rapid-prototype, integrated, distributed database system between Fort Lee (Oracle/PC) and Fort Monroe (SQL/DS on a 4381).

I designed, coded, and demonstrated the product in four weeks.

Since 2008 I have built and maintained several log-shipping instances on Standard Edition. In addition to running one failover site, we also ran a disaster recovery site built the same way. The failover was a manual operation and took about 15 minutes to complete, with less than two minutes of customer downtime. It worked identically for either site: failover or DR.

Standard Edition does not come with Data Guard. Oracle SE licenses are much less expensive.

Additionally, I have implemented true Master/Master replication for five, six and eighteen nodes.

While I do not have Data Guard experience, I have no qualms about using it in some future endeavor.

No RAC Here; but,

David Russell

As an independent contractor working with Oracle databases, I am frequently asked about RAC experience. Here is my analysis of the product, written in 2004 when Oracle went from “i” to “g”.

As contract DBA for Guidance Software, I accomplished the forensic preservation of the Oracle database from the infamous 77 Million Users Data Breach. Under the direction of the client’s legal office, the entire cage in the data center, several rows of equipment, was quarantined, cut off from the network, and preserved prior to final analysis. I spent four months in that cage: 65 degrees.

The Oracle DB was sitting there on a downed, two-node cluster, with data residing on raw devices in a SAN.

The legal office wanted Oracle brought up with no chance of triggers firing, or any other changes occurring, on startup of the host/instance/database. That proved hard to do, for a number of reasons.

The database had been running for six years, through several versions of Oracle software. There were pieces left over from rolling upgrades from versions eight through ten, plus the active Oracle 11gR2 instance. Oracle homes everywhere, and oddities like a listener that had been working before the hack, located in a folder that had been renamed, also before the hack.

I installed a NAS device, backed up over 7 terabytes of data, then defined and demonstrated recovery with RMAN.
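That backup-and-prove-it step was plain RMAN. Here is a hedged sketch of the shape of it; the NAS mount point, formats, and tag are illustrative:

```shell
# Back up the database to a NAS mount, then prove the pieces are readable.
# /mnt/nas and the tag are illustrative values.
rman target / <<'EOF'
CONFIGURE CONTROLFILE AUTOBACKUP ON;
BACKUP AS COMPRESSED BACKUPSET DATABASE
  FORMAT '/mnt/nas/ora_%d_%T_%U.bkp' TAG 'FORENSIC_COPY';
BACKUP CURRENT CONTROLFILE FORMAT '/mnt/nas/ctl_%d_%T_%U.bkp';
LIST BACKUP SUMMARY;
# VALIDATE reads the backup pieces without actually restoring anything.
RESTORE DATABASE VALIDATE;
EOF
```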

Several class action suits were anticipated. As of 2015, there are five on the books. My findings and my notebook were turned in with the database after four months of investigation. It’s about time for someone to call me for a restore 🙂

This is not exactly what I would call RAC experience, as the commands used during this task crossed several versions and involved misconfiguration, on top of a system that had been hacked and was down. I am not familiar with the normal, day-to-day operation of a RAC environment. I understand the theory… and the cost.

The product prior to RAC was called OPS, Oracle Parallel Server, and I was the lead DBA for two installations. Again, I never really worked with it day-to-day. I also implemented a Veritas cluster that used SAN features to make duplicate copies of the DB available in development and test. Less than 60 seconds of downtime was required to refresh either environment, on request.

A short learning curve will be required to use RAC; but, that’s one of the things that makes life interesting. Without a challenge, I probably would not want to do it again.

Install of Oracle Enterprise Manager 12c – Failed

David Russell No Comments

To see why it failed, see my post here.

After obtaining the required files for the 64-bit Windows installation, I unzipped them into a staging area. In an effort to rule out a “null pointer” Java error (a common, catch-all error), I unzipped them a second time, in sequential order (zip1, zip2, zip3)… it didn’t help.

The first error said that the target DB for the Oracle Management Server (OMS) needed to be cleaned up.

[screenshot]

The clean-up got progressively more detailed over the first few attempts. After about five of those clean-ups, a new machine image was in order. That way, a fresh installation could be started in 20 minutes, vs. a couple of hours.

Clean-up was also done on the Windows side, though possibly not all of it was required. Log files on Windows were in three basic locations: C:\Program Files\Oracle\inventory\logs, C:\Users\<username>\AppData\Local\Temp, and C:\Oracle\em\middleware\logs.

Enterprise Manager did require clean up; but, when the clean up was performed, it failed, as follows:

[screenshot]

We must not have gotten that far along.

The next error wasn’t the listener. It was a wrong IP address in the Linux /etc/hosts file. Just sort of crazy not to have that worked out right from the beginning.
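Checking for that kind of mistake up front is cheap. Here is a sketch of the check, run against a sample hosts file so it is safe to execute as-is; on a real system you would read /etc/hosts with your real hostname and IP:

```shell
# Verify a hostname in a hosts file resolves to the IP you expect.
# The sample file, hostname, and IP below are illustrative.
HOSTS=/tmp/hosts.sample
cat > "$HOSTS" <<'EOF'
127.0.0.1   localhost
10.0.0.15   emhost.example.com emhost
EOF
EXPECTED_IP=10.0.0.15
ACTUAL_IP=$(awk '$2 == "emhost.example.com" {print $1}' "$HOSTS")
if [ "$ACTUAL_IP" = "$EXPECTED_IP" ]; then
  echo "hosts entry OK: $ACTUAL_IP"
else
  echo "hosts entry WRONG: got $ACTUAL_IP, expected $EXPECTED_IP"
fi
```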

When I am installing an Oracle database, I am careful about Editions and options. Enterprise Edition does not automatically come with the extra-cost options, yet a default install loads (and links) them into Oracle. Some of these extra-cost options are used internally by Oracle in lower Editions; but, that’s their problem. If I am not doubling my license cost by buying, let’s say, Partitioning, I am not going to install Partitioning.

When creating an OMS, it turns out, partitioning is required, as per the following screen shot:

[screenshot]

The public synonyms belonged to the MGMT user… not properly deleted when doing the clean-up. The recommendation was misleading: there are over 7,000 public synonyms in Oracle, and only 322 of them were for the MGMT user.

Oracle does not always execute the “cascade”, “including datafiles”, and “including contents” qualifiers of command syntax. It’s always best to check: did it do what I think I asked it to do?

Re-linking the Oracle binaries to include the partitioning option required the DB to be shut down, two commands executed, and a restart. As it turned out, there were four more pieces of optional software that were required by WebLogic for the Oracle EM 12c installation.
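On 11.2, the usual route for those two commands is the chopt utility, which re-links the named option in or out. A sketch, assuming a Linux $ORACLE_HOME; do it with the instance down:

```shell
# Enable the Partitioning option in the Oracle binaries (11.2-style).
sqlplus -s / as sysdba <<'EOF'
SHUTDOWN IMMEDIATE
EOF

# chopt lives in $ORACLE_HOME/bin and logs what it re-linked.
"$ORACLE_HOME/bin/chopt" enable partitioning

sqlplus -s / as sysdba <<'EOF'
STARTUP
EOF
```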

The next four warnings could have been fixed immediately or, as I did, left until the end. The message follows here:

[screenshot]

I have not investigated the values yet. It will be interesting to see if they fit within the general constraints of my system. Changing redo log size is not trivial, though… let’s get it installed first and worry about it later.
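For reference, “changing redo log size” means a rolling replace, not an in-place resize, which is why it is worth deferring. A sketch with illustrative sizes, paths, and group numbers:

```shell
# Redo logs are replaced, not resized: add larger groups, then drop the
# old ones once V$LOG shows them INACTIVE. All values are illustrative.
sqlplus -s / as sysdba <<'EOF'
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/EMREP/redo04.log') SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/EMREP/redo05.log') SIZE 300M;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/EMREP/redo06.log') SIZE 300M;
ALTER SYSTEM SWITCH LOGFILE;
ALTER SYSTEM CHECKPOINT;
-- Repeat the switch/checkpoint until groups 1-3 are INACTIVE, then:
-- ALTER DATABASE DROP LOGFILE GROUP 1;  (and 2, and 3)
EOF
```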

The next screen was encouraging… showing the ports that the software was going to be using, as follows:

[screenshot]

Note that this is an Oracle installation of Oracle products, including the application server, WebLogic… yet the ports identified above do not include the ports WebLogic uses.

On AWS I use security groups within the Amazon console. On the individual instances (hosts) I use iptables.

Be sure that these ports, and any others that you identify, are not blocked by such groups or rules.
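On the host side, that means one ACCEPT rule per installer-reported port. The port numbers below are examples only; take the real list from your own ports screen, and mirror the same rules in the AWS security group:

```shell
# Open the EM/WebLogic ports on the host firewall (iptables).
# Ports are illustrative; use the ones the installer screen lists.
for PORT in 4889 7101 7301 7788; do
  iptables -A INPUT -p tcp --dport "$PORT" -j ACCEPT
done
service iptables save   # persist rules (RHEL/CentOS 6 style)
```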

The first big failure came next, at 40% completion, as follows:

[screenshot]

A quick Bing search said the “40%” failure was anti-virus and, sure enough, it was. Now, with AV disabled, it continued (well, after a clean-up and a do-over)…

Of course, now that AV is turned off, don’t Google for answers to Oracle errors… there are a LOT of Russian sites where Oracle answers can be found… a LOT of viruses, too.

Step 11 of 13, we’re almost there! LOL (I said that to my son so many times… I’m not superstitious; but, he says I jinxed myself)… almost there… the configuration about to be installed, as follows:

[screenshot]

WebLogic kept creating a file in the top level directory on my laptop. I kept getting the following message:

[screenshot]

I do not suppress such checks. In the very beginning, I tried to install OEM in the Program Files directory; but, Oracle does not like that location. It can be escaped properly and used; but, it is usually problematic at some point. The Oracle inventory and some log files are there; but, it’s easier to just not do it.

So I thought that WebLogic was choking on something left over from that first failed install. It is a valuable file to review. Hopefully, in a final installation, it will no longer be created, you know, outside of where it is supposed to be.

The next screen is as far as the install goes, 52%, as follows:

[screenshot]

The “View Log” link stays active the whole time. It’s pretty easy to follow what is going on here; however, once at 52%, it just sits for hours. Here is a snapshot at 51%; nothing bad has happened yet.

[screenshot]

That last line contains a process ID… I was never able to find such a process. I looked in the OS. I looked in Oracle. It wasn’t until much later that I figured out the WebLogic console, the ports, the user IDs, and the passwords.

This is a test installation. I used the same password everywhere and it shows in some of these screens. Rest assured, I am not so sloppy in real life; but, for here, the password is “Sc13nt1st”… and I must have typed it 250 times during 25 installation attempts.

I was a bit surprised to find the WebLogic password was “welcome1”. Come on Oracle! Geez!

Somewhere along the line I found out my recovery area was filling up. If you delete archived logs from Oracle at the command line, you still have to go into RMAN to recover the space. To my knowledge the installation never stopped because of this; but, I increased my recovery area to 10 GB. No more question there.
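The reconciliation itself is two RMAN commands, plus one SQL statement if you also grow the recovery area. The 10G value is illustrative:

```shell
# Tell RMAN the OS-deleted archived logs are gone, purge their records,
# then grow the fast recovery area.
rman target / <<'EOF'
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
EOF

sqlplus -s / as sysdba <<'EOF'
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE=BOTH;
EOF
```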

[screenshot]

Once the “process” at 52% has run for an hour or more, it starts to repeat the same message:

Info: oracle.sysman.top.oms: Still running….

I counted over 15,000 of these info messages after a three-hour run (while I slept).
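Counting them is a one-liner. This sketch builds a tiny sample log so it can be run anywhere; point LOG at the real installer log instead:

```shell
# Count the repeated OMS "Still running" heartbeat lines in a log file.
# The sample log below is illustrative; use the real installer log path.
LOG=/tmp/em_install_sample.log
cat > "$LOG" <<'EOF'
INFO: oracle.sysman.top.oms: Configuration assistant started
Info: oracle.sysman.top.oms: Still running....
Info: oracle.sysman.top.oms: Still running....
Info: oracle.sysman.top.oms: Still running....
EOF
grep -c 'Still running' "$LOG"
```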

At that point, if you stop the database and restart it, the WebLogic installation continues… I mean, at least it picks up the database connection and displays the appropriate log messages… it does not move from 52%, though.

If you cancel instead of stopping and starting the database, the installer GUI disappears.

Various messages along the way are informative; but, none lead to the eventual installation. I was able to identify an important missing file on the Oracle 11g DB side: /var/opt/oracle/oraInst.loc. The inventory pointer file normally lives elsewhere by default; but, for some reason, something required it to be in this location.

Connect to the host, su to root, mkdir oracle, chmod, chown, and it’s all fixed. Even after doing so, though, the installation logs still said the file was missing. It’s not hard to see it getting confused: with the 10 plugins that I selected to install, there were a total of 33 different Oracle Homes.
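Those steps amount to recreating the inventory pointer file. The sketch below writes under a scratch PREFIX so it is safe to run as-is; on the real host you would run it as root with PREFIX empty, and copy the inventory_loc value from your existing oraInst.loc (the values shown are illustrative):

```shell
# Recreate /var/opt/oracle/oraInst.loc (under a scratch PREFIX here).
# inventory_loc and inst_group are illustrative values.
PREFIX=${PREFIX:-/tmp/orainst-demo}
mkdir -p "$PREFIX/var/opt/oracle"
cat > "$PREFIX/var/opt/oracle/oraInst.loc" <<'EOF'
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
EOF
chmod 644 "$PREFIX/var/opt/oracle/oraInst.loc"
# On the real host, also: chown oracle:oinstall .../oraInst.loc
cat "$PREFIX/var/opt/oracle/oraInst.loc"
```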

I suspect it was confused more than once. (So was I, btw)

The only “failure” message that appeared in any of the logs was the one created by the DB restart… and the WebLogic installation continued after that…

[screenshot]

The next message threw me a curve… I was never able to find a table or view by that name.

[screenshot]

It did make me think about another database being out there… I found the WebLogic Server Console; but, still, never found this table. Even after linking the other four extra cost options back in… never found it.

The next snapshot shows an Oracle error at the correct time for the failure… but all it really shows is the statement “SEVERE: Failed Parameter Validation”… which could be as innocent as a mistyped password.

[screenshot]

Here is the “still running” message that follows each failure:

[screenshot]

The following “clean up” command worked for me, of course substituting my values for those in the three snapshots below. Please excuse that there are three; but, at least the whole command is present… if you need the info, it’s here.

[three screenshots showing the full clean-up command; scroll right for the second and third pieces]

Close, but no cigar… status 0 sounds encouraging; but, it failed. That process I couldn’t find definitely doesn’t exist now.

[screenshot]

All of that over five days, twenty-five installs, and two or three new AMIs. My customer pays for the Amazon resources; but, I do not bill for failed installations unless the failure was the customer’s fault. This was my fault, or Oracle’s fault, or ??? Not the customer’s fault.

Remember, you can check out why it failed here.

Thank you Oracle!


David is a Facilitator for AWS Users Group of San Diego and holds Solutions Architect Certificate AWS-ASA-16565. Retired and working with Arduino microcontrollers today, seeking ten +/- hours a week remote DB or DevOps work. Find me on LinkedIn as "debrucer" https://www.linkedin.com/in/debrucer/