Friday 16 October 2015

ERROR:

SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0


SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0

OR

SQL> startup nomount
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-01507: database not mounted


SQL> alter database mount;

Database altered.

SQL> RECOVER DATABASE UNTIL CANCEL;
ORA-00279: change 1615709 generated at 10/16/2015 01:06:27 needed for thread 1
ORA-00289: suggestion :
D:\APP\ADMIN\FLASH_RECOVERY_AREA\MCCDELO1Q\ARCHIVELOG\2015_10_16\O1_MF_1_58_%U_.
ARC
ORA-00280: change 1615709 for thread 1 is in sequence #58


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

ORA-00308: cannot open archived log
'D:\APP\ADMIN\FLASH_RECOVERY_AREA\MCCDELO1Q\ARCHIVELOG\2015_10_16\O1_MF_1_58_%U_
.ARC'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.


ORA-10879: error signaled in parallel recovery slave
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'D:\APP\ADMIN\ORADATA\MCCDELO1Q\SYSTEM01.DBF'


--------------------------------------------------------

RESOLUTION:

Because the needed archived log is missing, media recovery cannot complete normally. The database is therefore forced open with the undocumented _allow_resetlogs_corruption parameter, and the undo tablespace is rebuilt afterwards:


SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
mccdelo1q        MOUNTED

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
Database mounted.
SQL> ALTER SYSTEM SET "_allow_resetlogs_corruption"= TRUE SCOPE = SPFILE;

System altered.

SQL> ALTER SYSTEM SET undo_management=MANUAL SCOPE = SPFILE;

System altered.

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
Database mounted.
SQL> alter database open resetlogs;

Database altered.

SQL> CREATE UNDO TABLESPACE undo1 datafile 'D:\app\Admin\oradata\MCCDELO1Q\undo1_1.dbf' size 200m autoextend on maxsize unlimited;

Tablespace created.

SQL> ALTER SYSTEM SET undo_tablespace = undo1 SCOPE=spfile;

System altered.

SQL> alter system set undo_management=auto scope=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Total System Global Area 5344731136 bytes
Fixed Size                  2185160 bytes
Variable Size            3120564280 bytes
Database Buffers         2214592512 bytes
Redo Buffers                7389184 bytes
Database mounted.
Database opened.
SQL> alter system switch logfile;

System altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
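
Once the database is open again, it is worth verifying its state and taking a fresh full backup, since OPEN RESETLOGS starts a new incarnation of the database. The checks below are a suggested follow-up, not part of the original session:

SQL> SELECT name, open_mode, resetlogs_time FROM v$database;
SQL> SELECT tablespace_name, status FROM dba_tablespaces WHERE contents = 'UNDO';
SQL> SHOW PARAMETER undo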


Thank You.
NJ

 

Friday 28 August 2015

SAN Versus (V/S) DAS

SAN (storage area network) is a high-speed network of storage devices that also connects those storage devices with servers. It provides block-level storage that can be accessed by the applications running on any networked servers. SAN storage devices can include tape libraries and disk-based devices, like RAID hardware.

SAN Vs. DAS Performance

Organizations often choose to deploy a storage area network because it offers better flexibility, availability and performance than direct-attached storage (DAS). Because a SAN removes storage from the servers and consolidates it in a place where it can be accessed by any application, it tends to improve storage utilization. Storage utilization improvements often allow organizations to defer purchases of additional storage hardware, which saves money and requires less space in the data center.

Thanks to high-speed connections (usually Fibre Channel), SANs often provide better performance than DAS. Also, because SANs usually offer multiple connections to and from the data center's servers, they also improve availability. In addition, separating the storage from the servers frees up the computing resources on the servers for other tasks not related to storage.

SANs Simplify Management Tasks

SANs are particularly helpful in backup and disaster recovery settings. Within a SAN, data can be transferred from one storage device to another without interacting with a server. This speeds up the backup process and eliminates the need to use server CPU cycles for backup. Also, many SANs utilize Fibre Channel technology or other networking protocols that allow the networks to span longer distances geographically. That makes it more feasible for companies to keep their backup data in remote locations.

Utilizing a SAN can also simplify some management tasks, potentially allowing organizations to hire fewer IT workers or to free up some IT workers for other tasks. It's also possible to boot servers from a SAN, which can reduce the time and hassles involved in replacing a server.

SAN Alternatives

Before the advent of SANs, organizations generally used direct-attached storage (DAS). As the name implies, direct-attached storage is directly attached to the server, residing either on the server or in a standalone storage device that is not part of a separate storage networking environment. Many smaller organizations continue to use DAS today because it offers lower upfront costs than deploying a SAN. However, for larger companies, the benefits of a SAN often outweigh the costs.

Sometimes people confuse the term SAN with the term NAS, which stands for "network-attached storage." The key to distinguishing the two lies in the last term of each acronym: a SAN (storage area network) is an actual network, while NAS (network-attached storage) refers to a storage device, typically in an IP network. While SANs provide block-level storage for servers, a NAS device provides file-level storage for end users. For example, the mail application on your company servers might utilize a SAN to store all the messages, contacts and other data it requires; by contrast, an end user would use a NAS device to save files, such as word processing documents or spreadsheets. Operating systems see a SAN as a disk, while they see a NAS device as a file server.

Making things somewhat more confusing, some storage systems take a hybrid approach, offering some SAN capabilities as well as some NAS capabilities. It's also possible to include NAS devices within a SAN.

SAN Implementation

To set up a simple SAN, you need only three major components: a SAN switch, a storage device and a server. You'll also require cables to connect the various elements together and SAN management software. In most real-world settings, a SAN will include many different switches, storage devices and servers, and it will likely also include routers, bridges and gateways to extend the SAN over large areas and to connect to other parts of the data center network. The SAN's topology will depend on its size and the needs of the organization.

The process of deploying a SAN requires several steps. First, you need to design your SAN, taking into account your current needs and future scalability requirements. Second, you'll need to select a vendor or vendors to provide the hardware and software you'll need, as well as any related services. Next, you'll install the necessary hardware and then install and configure the software for managing your SAN. Deploying a SAN is a complicated process that often requires specialized knowledge and a great deal of planning, particularly if your SAN is very large.

SAN Technology

Several different industry groups have developed standards related to SAN technology. The most prominent is probably the Storage Networking Industry Association (SNIA), which promotes the Storage Management Initiative Specification (SMI-S), as well as related standards. The Fibre Channel Industry Association (FCIA) also promotes standards related to SAN and administers the SANmark Qualified Program.

Fibre Channel is currently the most widely used communication protocol for SANs, but it is by no means the only one. Some SAN networks rely on iSCSI communication, a mapping of SCSI protocol over TCP/IP. SANs can also use ATA over Ethernet (AoE), Fibre Channel over Ethernet (FCoE), ESCON over Fibre Channel, HyperSCSI and some other protocols.

Cheers to SAN!

Thanks
NJ



 
Network Attached Storage (NAS) V/S Storage Area Network (SAN)

Network Attached Storage (NAS)

Definition - What does Network Attached Storage (NAS) mean?

Network attached storage (NAS) is a dedicated server, also referred to as an appliance, used for file storage and sharing. NAS is a hard drive attached to a network, used for storage and accessed through an assigned network address. It acts as a server for file sharing but does not provide other services (such as email or authentication). It allows more storage space to be added to available networks even while the system is shut down for maintenance.

NAS is a complete system designed for heavy network systems, which may be processing millions of transactions per minute. NAS provides a widely supported storage system for any organization requiring a reliable network system.

Techopedia explains Network Attached Storage (NAS)

Organizations looking for the best, reliable data storage methods, which can be managed and controlled with their established network systems, often choose network attached storage. NAS allows organizations and home computer networks to store and retrieve data in bulk amounts for an affordable price.

The following three components play an important role in NAS:

    NAS Protocol: NAS servers fully support the Network File System (NFS) and the Common Internet File System (CIFS). NAS also supports other network protocols, including SCP and the File Transfer Protocol (FTP), although communication over TCP/IP is more efficient and reliable. The initial purpose of the NAS design was only file sharing over UNIX across a LAN. NAS also strongly supports HTTP, so users/clients can easily download files directly from the Web if the NAS is connected to the Internet.
    NAS Connections: Different media are used to establish connections with NAS servers, including Ethernet, fiber optics and wireless media based on the 802.11 standards.
    NAS Drives: Any technology can be used for this purpose, but SCSI is used by default. ATA disks, optical discs and magnetic media are also supported by NAS.


Storage Area Network (SAN)

Definition - What does Storage Area Network (SAN) mean?

A storage area network (SAN) is a secure, high-speed data transfer network that provides access to consolidated block-level storage. A SAN makes a network of storage devices accessible to multiple servers. SAN devices appear to servers as attached drives, eliminating traditional network bottlenecks.

SANs are sometimes also referred to (albeit redundantly) as SAN storage, SAN network, network SAN, etc.

Techopedia explains Storage Area Network (SAN)

Introduced in the early 2000s, SANs were initially limited to enterprise-class computing. Since then, the cost of high-speed disks has gradually dropped, and SANs have become a mainstay of large-scale organizational storage.

SAN implementation simplifies information life cycle management and plays a critical role in delivering a consistent and secure data transfer infrastructure.

SAN solutions are available as two types:

    Fibre Channel (FC): Storage and servers are connected via a high-speed network of interconnected Fibre Channel switches. This is used for mission-critical applications where uninterrupted data access is required.
    Internet Small Computer System Interface (iSCSI) Protocol: This infrastructure gives the flexibility of a low-cost IP network.

Both provide advantages based on business requirements.

The advantages of SAN include:

    Storage Virtualization: Server capacity is no longer linked to single storage devices, as large and consolidated storage pools are now available for software applications.
    High-Speed Disk Technologies: An example is FC, which offers data retrieval speeds that exceed 5 Gbps. Storage-to-storage data transfer is also available via direct data transmission from the source to the target device with minimal or no server intervention.
    Centralized Backup: Servers view stored data on local disks, rather than multiple disk and server connections. Advanced backup features, such as block level and incremental backups, streamline IT system administrator responsibilities.
    Dynamic Failover Protection: Provides continuous network operation, even if a server fails or goes offline for maintenance, which enables built-in redundancy and automatic traffic rerouting.

SAN is offered by server manufacturers, such as IBM and HP. Server-independent SAN providers include EMC and Network Appliance.

Play with storage and don't blame the network! :D :D

Thanks
NJ

 

Wednesday 26 August 2015

How to Set Up an ArcSDE Trace for API Developer Troubleshooting?



The trace corresponds to client API calls, not the calls made inside the server executive to the database itself.


Using the trace environment is simple.



Create the following two environment variables:



          SDETRACELOC=C:\TEMP1\trace



          SDETRACEMODE=vf



Now launch ArcGIS Desktop and reproduce the problem. Once you have captured what you need, disable the trace by removing or altering the SDETRACELOC variable.



And to know what is happening between the SE_stream_execute and the first SE_stream_fetch calls, you'll have to use Oracle's trace.
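
For example, a minimal sketch of enabling Oracle's SQL trace for the session of interest (the SID/serial# values and the program filter are placeholders; adjust them for your environment):

-- find the ArcSDE/gsrvr session (the program filter is only an assumption)
SELECT sid, serial#, program FROM v$session WHERE program LIKE '%gsrvr%';

-- enable SQL trace with wait events for that session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- ... reproduce the slow operation, then turn tracing off
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);

The trace file is written to the database server's trace directory and can be formatted with tkprof.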


So this is how we set up an SDE trace.


Thanks.
NJ

Saturday 22 August 2015

How do you estimate the REDO log file size? How do you determine what size the REDO log files should be?

Use the following query to check the recommended REDO log file size:


set lines 2000 pages 2000
-- OPTIMAL_LOGFILE_SIZE is reported in megabytes
select OPTIMAL_LOGFILE_SIZE from V$INSTANCE_RECOVERY;





Explanation:


The size of the redo log files can influence performance, because the behavior of the database writer and archiver processes depend on the redo log sizes. Generally, larger redo log files provide better performance. Undersized log files increase checkpoint activity and reduce performance.

Although the size of the redo log files does not affect LGWR performance, it can affect DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors, including log file size and the setting of the FAST_START_MTTR_TARGET initialization parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance recovery time, Oracle Database automatically tries to checkpoint as frequently as necessary. Under this condition, the size of the log files should be large enough to avoid additional checkpointing due to undersized log files. The optimal size can be obtained by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager.

It may not always be possible to provide a specific size recommendation for redo log files, but redo log files in the range of 100 MB to a few gigabytes are considered reasonable. Size online redo log files according to the amount of redo your system generates. A rough guide is to switch log files at most once every 20 minutes.
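
As a rough sanity check, you can compare the current online redo log sizes and log switch frequency against the advisor's recommendation. A small sketch using standard dynamic performance views:

-- current online redo log sizes in MB
select group#, bytes/1024/1024 as size_mb, status from v$log;

-- log switches per day; aim for roughly one switch every 20 minutes at most
select trunc(first_time) as day, count(*) as switches
from v$log_history
group by trunc(first_time)
order by day;

-- the recovery target that drives the advisor
show parameter fast_start_mttr_target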

Happy Blogging :)


Wednesday 12 August 2015

Slow ArcSDE Performance Troubleshooting

Why is my ArcSDE geodatabase running so slow? Is it because I have too much data? Probably not. That’s why you bought the big enterprise DBMS, isn’t it? Whether it’s a direct connection or an application server connection, sluggish SDE performance will happen from time to time.

Update statistics and rebuild indexes:

A good habit to get into is analyzing any new feature class or table brought into the geodatabase. Beyond that, see this link to rebuild indexes and update statistics for the entire GDB, and add it to your GDB maintenance routine:

FAQ:  How can ArcSDE performance be improved?
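
If you prefer to work directly in SQL*Plus on an Oracle geodatabase, gathering statistics for a data-owner schema can be as simple as the sketch below (the schema name GISDATA is a placeholder for whatever user owns your feature classes):

-- gather optimizer statistics for the data owner's schema, including indexes
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'GISDATA', cascade => TRUE);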


Compress:

Compressing (not rec/post) moves edits stored in the delta tables into the base table, as well as removing any unreferenced states. Even if you’re a small shop with few editors, letting these tables grow unmanaged will wreak havoc on performance over time. Also, you don’t necessarily need to disconnect users, delete versions or unregister replicas to benefit from a compress.


Compressing an ArcSDE geodatabase helps maintain database performance by removing unused data.
Specifically it does two things:
  • First, it removes unreferenced states and their associated delta table rows.
  • Second, it moves entries in the delta tables that are common to all versions into the base tables, thus reducing the amount of data that the database searches through when executing queries. In effect, a compress will improve query performance and system response time by reducing the depth and complexity of the state tree.
When a large volume of uncompressed changes has accumulated in an ArcSDE geodatabase, a compress operation can take hours or even days. This is another very common cause of poor performance. To avoid this, you should compress on a regular basis (daily, weekly, and after periods of high editing activity). Users can stay connected to the geodatabase during a compress, but we suggest that all users be disconnected for the compress operation to be fully effective.
Remember to update statistics before and after a compress, and note the one exception mentioned earlier. The compress command is available in ArcCatalog. You add the command from the Customize dialog box, and you must be connected as the SDE user to execute it, or you could execute a compress with SDE commands.
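
If you want a quick feel for how large the state tree has become before and after a compress, you can count rows in the repository tables. A sketch assuming an Oracle geodatabase whose repository is owned by the SDE user:

select count(*) as states   from sde.states;
select count(*) as lineages from sde.state_lineages;
select count(*) as versions from sde.versions;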

The geodatabase compress operation

HowTo:  Compress a versioned database to state 0

 Five Best Practices for Maintaining an ArcSDE Geodatabase

 

Direct Connections (2-Tier) V/S ArcSDE Service Connections (3-Tier):

Old habits are hard to break and the 3-tier application (ArcSDE) service is certainly one of them. When I know I have to use SDE commands for troubleshooting, I almost always set up a service so I don’t have to type the connection string repeatedly.

With that being said, for everyday use the direct connection really is the way to go. Consider this a choice between geodatabase transactions processed over the network or those same geodatabase transactions processed on the client machine. Most computers today have more than enough processing power to handle this, and other than the text in the connection properties, ArcSDE functionality remains unchanged.

Check out this great link for more details:


 Why should I be making direct connections to an ArcSDE geodatabase?



 

 
Here are a couple of direct connect syntax examples to get you started:
    SQL Server – sde:sqlserver:<server_name\instance_name>
    Oracle with ArcSDE 10 – sde:oracle11g*:<net service name>   * or oracle10g   
    Oracle pre v.10 – sde:oracle10g:\;LOCAL=<Oracle SID>   
    PostgreSQL – sde:postgresql:<server_name>

Thanks.
NJ

$SDEHOME/etc – Understanding SDEHOME and its components

Whether it’s $SDEHOME or %SDEHOME%, the “etc” folder contains a wealth of information that, when you encounter an error, may point out something silly and easy to fix or at least lead you in the right direction.

dbinit.sde

The dbinit.sde file is read each time the ArcSDE instance starts.  This file can be used to set environment variables for error logging, location paths, user names, passwords and more.  Here are two environment variables to enable a client intercept log.

set SDEINTERCEPT=TRUE

set SDEINTERCEPTLOC=C:\Temp\client_intercept

Environment variables

dbtune.sde

This file contains the configuration keywords and their specified values.  Typically, the default parameters are acceptable but it’s possible to create new keywords or change the default values. This is a topic unto itself so I’ll leave it be for now.   Have a look at these links for more details:

What is the DBTUNE table?

What are DBTUNE configuration keywords and parameters?

giomgr.defs

This file updates the sde.server_config table in the database.  Most of the initialization parameters in this table should not need to be altered from their default settings, except possibly the TEMP location on Windows installations and MINBUFFSIZE and MAXBUFFSIZE, which can be adjusted to improve data loading performance.

The TCPKEEPALIVE parameter is the value I seem to change the most. Setting this to TRUE can help avoid orphaned gsrvr processes, which can hog network resources and prevent additional connections. Here’s an example of changing the TCPKEEPALIVE parameter to TRUE with the sdeconfig command:

C:\> sdeconfig -o alter -v TCPKEEPALIVE=TRUE -i <service> -D <database_name>
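
To double-check the stored value afterwards, you can query the repository table directly. A sketch that assumes the standard Oracle layout of the SERVER_CONFIG table under the SDE user:

-- confirm the parameter the sdeconfig call just altered
select prop_name, char_prop_value, num_prop_value
from   sde.server_config
where  prop_name = 'TCPKEEPALIVE';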


ArcSDE Command Reference

services.sde

This file stores the name and TCP/IP port number for the ArcSDE service.  Unix machines will always pull information from this file to connect.  Windows machines will only use this file when starting a service with the sdemon command.  The Windows services file can be found in the %windir%\System32\drivers\etc directory.
#
# ESRI ArcSDE Remote Protocol
#
#esri_sde 5151/tcp

sde_<service_name>.log, giomgr_<service_name>.log

The giomgr listens for requests to connect to the database.  When the request is received, the giomgr spawns a gsrvr process for that client.  When a service fails to start or if the giomgr fails to cough up a gsrvr, a brief description of the problem and an error code will appear in this file.  Have a look at this link describing ArcSDE error return codes.

Return codes

sdedc_<database_name>.log

This file contains connection information, details on specific commands, and errors raised during connection initialization. Much like the service log, it will report back what’s happening during a direct connection.

    Tip: If you have an etc directory in your ArcGIS installation location, the file is written there. If you have neither an SDEHOME variable nor an etc directory, the log files are written to the temp directory.


Thanks.
NJ