

Sun Cluster Cheat Sheet

This cheat sheet contains common commands and information for both Sun Cluster 3.1 and 3.2; some information is still missing. Both versions of Cluster also have a text-based menu tool (scsetup in 3.1, clsetup in 3.2), so don't be afraid to use it, especially if the task is a simple one.

At the bottom of the installation guide I listed the daemons and processes running after a fresh install; now is the time to explain what these processes do. I have managed to obtain information on most of them, but I am still looking for others.

clexecd - Used by cluster kernel threads to execute userland commands; it is also used to run cluster commands remotely (such as the cluster shutdown command). This daemon registers with failfastd so that a failfast device driver will panic the kernel if the daemon is killed and not restarted within 30 seconds.

cl_eventd - The cluster event daemon. There is also a protocol whereby user applications can register themselves to receive cluster events. It is automatically restarted if it is stopped.

cl_ccrad - Provides access from userland management applications to the CCR (cluster configuration repository). It is automatically restarted if it is stopped.

scdpmd - The disk path monitoring daemon monitors the status of disk paths so that they can be reported in the output of the cldev status command. This multi-threaded DPM daemon runs on each node; it is started automatically by an rc script when a node boots and is restarted automatically by rpc.pmfd if it dies.
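A quick way to confirm that these daemons are present and that disk path monitoring is healthy is sketched below; the daemon names match the list above, cldev status is the 3.2 command and scdpm the 3.1 equivalent, and output will vary by release.

    ps -ef | egrep 'clexecd|cl_eventd|cl_ccrad|scdpmd'   # list the cluster daemons described above
    cldev status                                         # 3.2: report the status of monitored disk paths
    scdpm -p all:all                                     # 3.1: print the status of all disk paths on all nodes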

When a drive is added to a diskset, it is repartitioned: a small portion of each drive is reserved in slice 7 for use by Solaris Volume Manager software, and the remainder of the space on each drive is placed into slice 0. The precise size of slice 7 depends on the disk geometry, but it will be no less than 4 Mbytes, and probably closer to 6 Mbytes, depending on where the cylinder boundaries lie.

Note: the minimal size for slice 7 will likely change in the future, based on a variety of factors, including the size of the state database replica and the information to be stored in it.
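For example, adding a drive to a diskset triggers this repartitioning, and prtvtoc shows the resulting layout. This is a minimal sketch; oraset and the DID device d4 are placeholder names.

    metaset -s oraset -a /dev/did/rdsk/d4    # add the drive to the diskset; it is repartitioned automatically
    prtvtoc /dev/did/rdsk/d4s2               # verify the new layout: a small slice 7, the rest in slice 0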

Any existing data on the disks is lost by this repartitioning. After you add a drive to a diskset, you may repartition it further as necessary, with the exception that slice 7 must not be altered in any way. To have the cluster manage the diskset or disk group, you then have to identify it to the cluster.

For each device (disk, tape, Solaris Volume Manager diskset, or VxVM disk group) that should be managed by the cluster, ensure that there is an entry in the cluster configuration repository (CCR).
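The commands below sketch how to check for (and, in the case of disks and tapes, create) those entries; cldevice and cldevicegroup are the 3.2 commands, with scgdevs and scdidadm as the 3.1 equivalents.

    cldevice populate     # 3.2: add DID entries for newly attached disks and tapes (scgdevs in 3.1)
    cldevice list -v      # 3.2: list the DID devices known to the cluster (scdidadm -L in 3.1)
    cldevicegroup list    # 3.2: list the registered device groups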


When a diskset or disk group is known to the cluster, it is referred to as a device group. A device group is an entry in the cluster repository that defines extra properties for the disk group or diskset, such as the ordered node list (whose first entry is the preferred node) and the failback policy. This effectively means that, when all cluster nodes are booted at the same time, the diskset is taken by its preferred node.

If the failback policy is enabled and the preferred node joins the cluster later, it becomes the owner of the diskset; that is, the diskset switches from the node that currently owns it to the preferred node. When you create or delete a diskset with Solaris Volume Manager software commands, the cluster framework is automatically notified and creates or deletes the corresponding device group entry in the cluster configuration repository. You can also manually change the preferred node and failback policy with the standard cluster interfaces, as sketched below.
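For example, creating a diskset registers a device group automatically, and its preferred node, failback policy, and current primary can then be adjusted with the standard cluster commands. This is a sketch only; oraset, node1, and node2 are placeholder names, and the cldg (cldevicegroup) forms shown are the 3.2 syntax with 3.1 equivalents noted.

    metaset -s oraset -a -h node1 node2                    # create the diskset; a device group is registered for it
    cldg status oraset                                     # 3.2: show the device group (scstat -D in 3.1)
    cldg set -p preferenced=true -p failback=true oraset   # 3.2: honour the node order and enable failback
    cldg switch -n node2 oraset                            # 3.2: move the device group to node2
    scswitch -z -D oraset -h node2                         # 3.1: equivalent switch of the device group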

Using Mediators to Manage Replica Quorum Votes

Disksets have their own replicas, which are added to a drive when it is put into the diskset, provided that the maximum number of replicas has not been exceeded. It is possible to manually administer these replicas through the metadb command, but generally this is not required.

The need to do so is discussed in the next section. Replicas should be evenly distributed across the storage enclosures that contain the disks, and they should be evenly distributed across the disks within each enclosure.
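To check how the replicas ended up distributed, list them per diskset; oraset is a placeholder name in this sketch.

    metadb -s oraset      # list the diskset replicas and the slice 7 partitions they occupy
    metadb -s oraset -i   # the same listing plus an explanation of the status flags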

In an ideal environment, this distribution means that any one failure in the storage (disk, controller, or storage enclosure) does not affect the operation of Solaris Volume Manager software. In a physical configuration that has an even number of storage enclosures, the loss of half of the storage enclosures (for example, due to a power loss) leaves only 50 percent of the diskset replicas available.

While the diskset is owned by a node, this does not create a problem. However, if the diskset is released, then on a subsequent take the replicas will be marked as stale, because the replica quorum of greater than 50 percent will not have been reached. This means that all the data on the diskset will be read-only, and operator intervention will be required. If, at any point, the number of available replicas for either a diskset or the local (node) replicas falls below 50 percent, the node will abort itself to maintain data integrity.

To enhance this feature, you can configure a set to have mediators. Mediators are hosts that can import (take) a diskset and, when required, they provide an additional vote when a replica quorum vote is needed (for example, on a diskset import or take). To assist the replica quorum requirement, mediators also have a quorum requirement of their own: either more than 50 percent of them must be available, or the available mediators must be marked as up to date. A mediator that is up to date is said to be golden and is marked as such.
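Adding mediators to a diskset and checking their status is sketched below; oraset, node1, and node2 are placeholders.

    metaset -s oraset -a -m node1 node2   # add both nodes as mediator hosts for the diskset
    medstat -s oraset                     # show mediator status, including whether they are golden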

Mediators, whether they are golden or not, are only consulted when a diskset is taken. If the mediators are golden and one of the nodes is rebooted, then when it starts up, the mediators on it will get the current state from the node that is still in the cluster. However, if all nodes in the cluster are rebooted while the mediators are golden, then on startup the mediators will not be golden, and operator intervention will be required to take ownership of the diskset.

The actual mediator information is held in memory by the rpc.metamedd daemon on each mediator host. For example, if there are two hosts (node1 and node2) and two storage enclosures (pack1 and pack2), the diskset replicas are distributed evenly between pack1 and pack2, and node1 owns the diskset.

If pack1 dies, only 50 percent of diskset replicas are available and the mediators on both hosts are marked as golden.

If node1 now dies, node2 can import (take) the diskset because 50 percent of the diskset replicas are available and the mediator on node2 is golden. If mediators were not configured, node2 would not be able to import (take) the diskset without operator intervention. Mediators do not, however, protect against simultaneous failures. If both pack1 and node1 fail at the same time, the mediator on node2 will not have been marked as golden, there will not be an extra vote for the diskset replica quorum, and operator intervention will be necessary.

Because the nodes should be on an uninterruptible power supply (UPS), the mediators should have enough time to be marked as golden, so this type of simultaneous failure is unlikely.

One scenario in which it can occur, however, is a two-room cluster, where each room has one node and one storage device. If a room fails, any diskset that was owned by the node in that room will require manual intervention. On the surviving node, the administrator needs to take the set using the metaset or scswitch command and remove the replicas that are marked as errored. When this is done, the diskset must be released and retaken so that it regains write access to the configured metadevices. By manually redistributing the replicas, the need for manual intervention can be minimized.
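A sketch of that manual recovery on the surviving node is shown below; oraset is a placeholder diskset name, and the DID device holding the errored replicas (d4 here) comes from the metadb output.

    metaset -s oraset -t -f                  # force a take of the diskset on the surviving node
    metadb -s oraset                         # identify the replicas that are marked as errored
    metadb -s oraset -d /dev/did/rdsk/d4s7   # delete the errored replicas
    metaset -s oraset -r                     # release the diskset ...
    metaset -s oraset -t                     # ... and retake it to regain write access to the metadevices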

Using Soft Partitions as a Basis for File Systems

After adding a disk to a diskset, you can modify the partition layout; that is, you can break up the default slice 0 and spread the space across the slices, including slice 0.

If slice 7 contains a replica, leave it alone to avoid corrupting the replica on it. Because Solaris Volume Manager software supports soft partitioning, we recommend that you leave slice 0 untouched.

Consider a soft partition as a subdivision of a physical Solaris OE slice, or as a subdivision of a mirror, RAID 5, or striped volume. There is a default limit on the number of possible volumes. Note that all soft partitions created in a single diskset are part of that diskset and cannot be independently primaried to different nodes. Soft partitions are composed of a series of extents that are located at arbitrary locations on the underlying media; the locations are automatically determined by the software at initialization time.

It is possible to set these locations manually, but this is not recommended for general day-to-day administration; locations should be set manually only during recovery scenarios where the metarecover(1M) command is insufficient.

In our example, we use the second approach: soft partitions created on top of a Solaris Volume Manager volume (such as a mirror) rather than directly on a physical slice.
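A minimal sketch of that second approach follows; every name is a placeholder (oraset for the diskset, DID drives d4 and d5, submirrors d11 and d12, mirror d10, and soft partition d100), and the sizes are illustrative only.

    metainit -s oraset d11 1 1 /dev/did/rdsk/d4s0   # first submirror: a one-slice concatenation
    metainit -s oraset d12 1 1 /dev/did/rdsk/d5s0   # second submirror
    metainit -s oraset d10 -m d11                   # create the mirror with the first submirror
    metattach -s oraset d10 d12                     # attach the second submirror
    metainit -s oraset d100 -p d10 10g              # carve a 10 Gbyte soft partition out of the mirror
    newfs /dev/md/oraset/rdsk/d100                  # build a file system on the soft partition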
