Channel: TSMAdmin
Viewing all 85 articles

Do Large Corporations Need Tape?

I am dealing with a situation where I have gone from a tapeless TSM environment to the standard TSM tape model, and I have to wonder: why would you use tape when you have multiple data centers? If you have multiple data centers, why not back up to disk and replicate the data to a disk solution at the alternate DC? I did this with Data Domains and it made life so much easier. Multiple DR tests showed it was efficient and successful, and since it utilized deduplication, disk usage and costs didn't get out of hand. So I ask: why is any large corporation still using tape?

Upgrading A TSM 5.5 Library Manager to 6.x

I just helped (sort of) perform an upgrade of two TSM library managers from TSM 5.5 to TSM 6.2. First off, I'd like to say that the standard upgrade process was really not worth the time it took. Our library manager had a 1GB DB and contained no client data. When the library manager contains no client data you can easily move from 5.x to 6.x without all the headaches of a DB upgrade through the extract-and-insert process (which took 1 hour to complete once we started the insert). Here are the basic steps to easily upgrade a TSM library manager:
  • Backup the TSM library manager DB
  • Backup Volhist and Devconfig
  • Copy all define statements from devconfig into a TSM macro
  • Uninstall TSM 5.5
  • Install TSM 6.x
  • Follow the steps to create a new TSM 6.x server
  • Start the TSM 6.x server
  • Run the macro to redefine all the servers, devclasses, libraries, drives, and paths
  • Check-in the tapes to the library
  • Run audit library from each of the library clients
It might seem like a lot, but once you've got the TSM 6.x server up and running, defining the other items is easy and will take a lot less time than running the upgrade process.
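The "copy all define statements into a macro" step above can be sketched in the shell. The devconfig contents below are a hypothetical fragment for illustration only; on a real system you would run the grep against the devconfig file you saved earlier:

```shell
# Simulated devconfig contents (on a real server, use your saved devconfig file).
cat > devconfig.out <<'EOF'
SET SERVERNAME LIBMGR1
DEFINE DEVCLASS 3592CLASS DEVTYPE=3592 LIBRARY=LIB1
DEFINE LIBRARY LIB1 LIBTYPE=349X
DEFINE DRIVE LIB1 DRV1
DEFINE PATH LIBMGR1 DRV1 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=LIB1 DEVICE=/dev/rmt0
EOF

# Keep only the DEFINE statements; reorder if needed so libraries and
# devclasses are defined before the drives and paths that reference them.
grep -i '^DEFINE' devconfig.out > redefine.mac

# Replay on the new 6.x server, e.g.:
#   dsmadmc -id=<admin> -pa=<password> macro redefine.mac
cat redefine.mac
```

The SERVERNAME line is deliberately dropped; the new 6.x instance gets its name during the server-creation step.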

NOTE: This only works if you do not perform ANY backups (Client or NAS) to the library manager.

Tivoli Storage Manager Next Administrative Interface / Beta Programme

It'll be called Operations Center (the name sounds a bit like Admin Center ;-)). I hope they didn't only change the design!

Data Domain vs. Protectier

Where I am currently employed we are looking to replace our 3592-based library with a deduplication solution. Currently the higher-ups are leaning towards IBM ProtecTIER without having thoroughly investigated any other solutions. Having used Data Domain solutions at my previous employer, I was somewhat concerned that ProtecTIER would be a bad fit for our environment. I have had some run-ins with people who have used IBM's ProtecTIER solution, and when you compare them with those who have used Data Domain (including myself), you immediately see the difference in how they talk about the two products. I was hoping to find a good write-up with in-depth details comparing the two solutions, and it took a fellow blogger to provide a great comparison. If you would like a good overview of how Data Domain and ProtecTIER stack up against one another in technology and performance, check out the following link. It's very informative and solidifies why I would prefer Data Domain.

Deduplication: Data Domain Vs. ProtecTIER Performance

One item that was not covered was the NFS capabilities of both. While I used the VTL functionality with Data Domain, I was a HUGE NFS proponent. You can save a lot of money over a TSM TDP + LAN-Free solution by using NFS with 10Gb Ethernet for your DB backups (since IBM's licensing costs are still questionable). When I first explored ProtecTIER it did not yet have NFS capabilities, so I'd like to see an NFS performance comparison between the two products.

Cleaning tape cycles CLI


#IBM 3494#

mtlib -l <library-device> -q L | grep 3592

#IBM 3584, A.K.A. IBM TS3500#

/opt/java6/bin/java -jar TS3500CLI.jar -a <library-address> -viewCleaningCartridges -u <user> -p <password> | awk -F',' '{total=total+$9;}END{print total}'

Insert it into your own morning TSM report script! ;-)
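The awk stage can be exercised standalone against a sample of the CSV output. The field layout below is assumed for illustration (field 9 holding the remaining cleaning cycles, per the command above):

```shell
# Simulated -viewCleaningCartridges output; the real data comes from the
# TS3500CLI.jar command above. Field 9 is assumed to be cleaning cycles left.
cat > cleaning.csv <<'EOF'
CLN001,slot1,x,x,x,x,x,x,42
CLN002,slot2,x,x,x,x,x,x,38
EOF
awk -F',' '{total = total + $9} END {print total}' cleaning.csv
```

With the two sample rows above, the sum printed is 80 cleaning cycles remaining across the library.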

TSM 7

I recently attended an IBM technical briefing on various storage-related topics, including TSM. While it was under NDA, I can say that some of the items we discussed showed promise. I'll be able to discuss more after IBM Pulse this month, but what I can say is that the new Admin Center is pretty slick. It has some nice features and will finally make up for the folly that was the ISC. IBM stressed that they are listening to users and taking their requests and suggestions to try and develop a tool everyone will find useful. That was surprising news, seeing as the majority of people complained about the ISC and it took 7+ years to finally get a replacement. I will say this in defense of the TSM developers: a lot of the ISC push came from above, and they were somewhat forced into that fiasco. The TSM 7 DB will scale larger and handle more objects, and they are really ramping up the capabilities of the client deployment module. More info to come in the next couple weeks.

One item that did come up was the issue of Export and Backup Set tapes being unencrypted from TSM due to the key issue. What I suggested was that they allow TSM servers to backup each others keys and also utilize them so Exports and Backup-Sets could be encrypted, but still shared between TSM servers. Hope they find some way to add that capability.

We did have a ProtecTIER review and it has a lot of promise. I know I have been a Data Domain fanboy for some time. While I didn't see anything that integrated ProtecTIER dedupe with TSM directly, it did show some nice growth capabilities. I'm excited to see how well it works, but I'm up against a study that shows tape is still the more cost-effective backup solution.

I'll post more once PULSE is complete (mid-June) so stay tuned!

IBM P7 Strange Behaviour

We have a P7 frame with 4 LPARs that are used as TSM storage agents, onto which snapshots of our SAP DBs are mounted for backup. They always had great performance until one LPAR had a bad HBA that phoned home and was replaced. After the replacement, backup performance dramatically decreased from 800MB/s to 150MB/s and overall performance of the server would drastically drop. When the DB requiring backup is over 25TB that is a huge hit, and we could not find the root cause.

At first IBM said our Hitachi disk was the problem. We eliminated that right away, so we then replaced the new HBA, checked our fiber, and checked the GBIC, and nothing fixed the situation. During the first week I asked the IBM service technician if we could possibly have a bad drawer or slot and he emphatically said "No! If you did you would have errors all over the place." So we checked firmware, we moved cards within the frame (again), we double-checked the fiber; now we were going into the third week. I kept asking if something could be wrong with the drawer/slots and I kept getting the same answer. The reason I suggested it was previous experience: I have seen hardware go bad without totally going "out."

So after exhausting every avenue other than replacing the slots, IBM finally replaced the slots. Voila! Backup speeds went back to normal and system degradation during the backup disappeared. So the slots/drawer was the issue. No errors relating to a slot/drawer hardware issue were ever logged, but something caused the slots to degrade performance.

It took almost a month to resolve the issue. I wouldn't say IBM support was very thorough, and at times they tried to push the problem off to other vendors (i.e. Hitachi). I can only suggest that in the future you trust your instincts and push the CEs to follow every avenue. My headache is over, but now the RCA begins.

TSM Command Processing Tip

I constantly have to run a large list of commands and sometimes just don't want to deal with running them through a shell script. So what's the best way to run a list of commands without TSM prompting for a YES/NO? I can use a batch command with the -NOPROMPT option from an admin command line, but sometimes that's more work than I want to deal with. There's got to be a better way. Well, the simple answer is to define the TSM server to itself and use that server name in the command when you run it. Here's an example: I have to delete empty volumes from storage pools rather than wait for the 1 day delay.

select 'ustsm07:del vol', cast(volume_name as char(8)) as VOLNAME from volumes where pct_utilized=0 and devclass_name <> 'DISK'

RESULTS:

Unnamed[1]           VOLNAME   
----------------     --------- 
ustsm07:del vol      K00525    
ustsm07:del vol      K00526    
ustsm07:del vol      J00789    
ustsm07:del vol      J00197    
ustsm07:del vol      J00303    
ustsm07:del vol      J01172    
ustsm07:del vol      J01233    
ustsm07:del vol      J00850    
ustsm07:del vol      J00861    
ustsm07:del vol      K00018    
ustsm07:del vol      J01613    
ustsm07:del vol      J01624    
ustsm07:del vol      J01671    
ustsm07:del vol      J01687    
ustsm07:del vol      K00116    
ustsm07:del vol      K00130    
ustsm07:del vol      K00340    
ustsm07:del vol      K00348 

tsm: USTSM07>USTSM07:del vol       K00525
ANR1699I Resolved USTSM07 to 1 server(s) - issuing command DEL VOL K00525 against server(s).
ANR1687I Output for command 'DEL VOL K00525' issued against server USTSM07 follows:
ANR2208I Volume K00525 deleted from storage pool TAPE_A.
ANR1688I Output for command 'DEL VOL K00525' issued against server USTSM07 completed.
ANR1694I Server USTSM07 processed command 'DEL VOL K00525' and completed successfully.
ANR1697I Command 'DEL VOL K00525  processed by 1 server(s):  1 successful, 0 with warnings, and 0 with errors.
   

So I copy the results and paste them into my command line, and because I am using server routing (even to the same server I'm on), TSM does not prompt for confirmation. So make sure you have defined your TSM servers to themselves so you can take advantage of this simple feature. Also note that TSM won't delete a tape with data, so I leave the DISCARD=YES option off; that way only EMPTY tapes are deleted.
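If you want to take the pattern all the way to a script, the same idea works from the shell. A sketch, with the dsmadmc invocations shown only as comments (the admin ID and password are placeholders) and the select output simulated by a sample file:

```shell
# In practice, generate the command list with something like:
#   dsmadmc -id=<admin> -pa=<pass> -dataonly=yes \
#     "select 'ustsm07:del vol ' || volume_name from volumes \
#      where pct_utilized=0 and devclass_name <> 'DISK'" > delvols.mac
# Simulated output for illustration:
cat > delvols.mac <<'EOF'
ustsm07:del vol K00525
ustsm07:del vol K00526
EOF
# Then replay the macro, again routed through the server-to-itself definition:
#   dsmadmc -id=<admin> -pa=<pass> macro delvols.mac
wc -l < delvols.mac    # one routed DEL VOL per empty tape
```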



Archive Report

Where I work we have a process that generates a mksysb twice a month and archives it to TSM. A recent attempt to use an archived mksysb revealed that the mksysb process sometimes does not create a valid file, yet the file is still archived to TSM. So the other AIX admins asked me to generate a report showing the amount of data archived and the date it occurred. I would have told them it was impossible if they had asked for data from the backup table, but our archive table is not as large as the backups, so I gave it a go.

The first problem was determining the best table(s) to use. I could use the summary table, but it doesn't tell me which schedule ran, and some of these UNIX servers do have archive schedules other than the mksysb process. The idea I came up with was to query the contents table and join it with the archives table on the object_id field. Here's an example of the command:

select a.node_name, a.filespace_name, a.object_id, cast(b.file_size/1048576 as decimal(9,2)) as SIZE_MB, cast(a.ARCHIVE_DATE as date) as ARCHIVE from archives a, contents b where a.node_name=b.node_name and a.filespace_name='/mksysb_apitsm' and a.filespace_name=b.filespace_name and a.object_id=b.object_id and a.node_name like 'USA%'

This select takes at least 20 hours to run across 6 TSM servers. I guess I should be happy it returns at all, but TSM is DB2 now! It should be a lot faster, so I am wondering if I can clean up the script or add something that would let it index into the data faster. I am considering dropping the LIKE and just matching node_name between the two tables. Would putting the node_name match first, then the object_id match, be faster? Would I be better off running it straight out of DB2? Suggestions appreciated.
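For what it's worth, here is a sketch of the same query in explicit JOIN form, which at least makes the join predicates obvious to the optimizer. It is untested, and any speedup will depend on what indexes DB2 has available on node_name, filespace_name, and object_id:

```sql
-- Sketch only: same result set as the comma-join version above.
select a.node_name, a.filespace_name, a.object_id,
       cast(b.file_size / 1048576 as decimal(9,2)) as size_mb,
       cast(a.archive_date as date) as archive
  from archives a
  join contents b
    on  b.node_name      = a.node_name
    and b.filespace_name = a.filespace_name
    and b.object_id      = a.object_id
 where a.filespace_name = '/mksysb_apitsm'
   and a.node_name like 'USA%'
```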

Full TSMExplorer for TSM version 5 is free now

New TSM Admin In The House!

Just thought I should let everyone know that my wife and I had a son on December 3rd. The holidays and the lead-up to his birth have kept me busy. My son makes 8 kids total, and I'm a very busy man. So don't worry, I shall return; the last 9 months have just been a blur.

Poor Performance

Currently I work in an environment with a dedicated TSM instance for a large SAP DB (99TB currently). We just upgraded the drives in the tape library (yes, we use tape! I know... I know...) from Magstar 3592 TS1130 (E06) drives to TS1140 (E07) drives. The upgrade was pushed in hopes of a jump in write/backup performance, but I was skeptical. TSM adds so much overhead that you cannot use the raw tape read/write numbers from any manufacturer. Typically IBM is somewhat reasonable with their numbers, but in this case I have seen NO performance increase whatsoever. Here is a query of the storage pool backup processes.

UPDATE (04/04/2014): Let me give you some more specs. We have the 99TB DB split between 4 TSM storage agents, each with four 8Gb HBAs. Each storage agent runs 4 sessions (allocating 4 drives) for its backup process, so all 4 storage agents account for 16 simultaneous sessions, and it still takes over 24 hours to perform the 99TB backup. The backups are averaging around 70-78MB/sec. Is this a TSM overhead issue, or do I have a tuning issue with the TDP and TSM? I'm getting less than 50% of the throughput I should see.

Here's the command that is run to execute the DB backup:

ksh -c 'export DB2NODE=7; db2 "backup db DB8 LOAD /usr/tivoli/tsm/tdp_r3/db264/libtdpdb264.a OPEN 4 SESSIONS OPTIONS /db2/DB8/dbs/tsm_config/vendor.env.7 WITH 14 BUFFERS BUFFER 1024 PARALLELISM 8 WITHOUT PROMPTING"; echo BACKUP_RC=$?'

PROCESS_NUM: 2667
    PROCESS: Backup Storage Pool
 START_TIME: 03-27 23:21:54
   DURATION: 00 23:20:13
      BYTES: 6.0TB
 AVG_THRPUT: 75.87 MB/s

PROCESS_NUM: 2668
    PROCESS: Backup Storage Pool
 START_TIME: 03-27 23:21:55
   DURATION: 00 23:20:12
      BYTES: 6.2TB
 AVG_THRPUT: 78.48 MB/s

PROCESS_NUM: 2669
    PROCESS: Backup Storage Pool
 START_TIME: 03-27 23:21:55
   DURATION: 00 23:20:12
      BYTES: 6.2TB
 AVG_THRPUT: 77.99 MB/s

PROCESS_NUM: 2670
    PROCESS: Backup Storage Pool
 START_TIME: 03-27 23:21:55
   DURATION: 00 23:20:12
      BYTES: 6.4TB
 AVG_THRPUT: 80.13 MB/s

I average anywhere from 75 to 80 MB/sec. The Magstar performance chart (not reproduced here) shows the rated throughput by media type; I am using JB media, not JC, so I do take a little hit in performance for that.

So with JB media I could get as high as 200MB/sec, but I am not even at 50% of that number. Is there any specific tuning parameter I should look at that could be hindering the performance?

FYI - The backup of the 99TB DB runs LAN-Free using 16 tape drives over 26 hrs.

IBM Tivoli Storage Manager is NOT affected by the OpenSSL Heartbleed vulnerability

Sony Develops 185TB Tape

Poor Performance Followup

As a follow-up to the previous poor-performance post, I thought I'd post the outcome. As it turns out, we checked performance tuning settings in TSM and AIX and saw no improvement. We asked the DB2 admins to review their settings, and they could not find any tunables that had not already been implemented. We sent in servermon.pl output, and although support agreed the performance was sub-par, they couldn't pinpoint what was causing it. There were no server/adapter/switch/disk/tape errors, so nothing emerged as the culprit for our poor throughput.

So we reviewed the backup time of each TSM storage agent used to back up this 101 TB SAP database. At the time, the storage agents performing the backup consisted of 5 LPARs: 4 of them in a single frame, each with its own assigned I/O drawer, and the 5th in a separate 740 frame with its own I/O drawer. The 5th storage agent was completing its share of the backup in a fraction of the time of the other 4, so we concluded we must be overloading the CEC on the 740. We moved one of the four storage agents out of the frame to a secondary frame, and the results were awesome (chart not reproduced here).

You'll notice that the backup time didn't change with the update of the tape drives from E06 to E07; hardware layout matters more than the performance of the tape drives. When a vendor tells you that simply updating hardware to newer iterations will increase performance, take it with a grain of salt. In our case we tested the new tape drives and saw no performance gains, but the go-ahead was given to upgrade to the newer hardware anyway, and we didn't gain anything until we reworked the environment. Our task now is to identify how to increase TSM internal job performance (i.e. migration and storage pool backup), which has not seen significant gains from the tape upgrades.


TKLM and TSM Encryption

When it comes to encryption and TSM you'll find varying responses from admins. Some use the TSM server as the key manager, others implement a library-based key manager, and others use a third-party software product. In the past I used TSM's internal encryption key management, and while it is a set-it-and-forget-it process, it has some limitations when it comes to Exports and DB Backups. That is where third-party software like TKLM can be beneficial. I recently implemented TKLM and, after some hiccups along the way, am still undecided on whether I like it. If you use TKLM, let me know your experience and whether there are any issues I should be aware of. I'll post my hiccups next week as they will take some time to discuss.

TKLM - Things To Know Part 1


DB2 Password and TKLM Data Source Out of Sync

On systems such as Linux or AIX, you might need to change the password for the DB2® Administrator user ID. The login password for the DB2 Administrator user ID and the DB2 password for the user ID must be the same.
The Tivoli Key Lifecycle Manager Installation program installs DB2 and prompts the installing person for a password for the user named tklmdb2. Additionally, the DB2 application creates an operating system user entry named tklmdb2. For example, the password for this user might expire, requiring you to resynchronize the password for both user IDs.
Typically you can tell that the DB2 ID password is no longer in sync with the data source password when an authentication error appears while accessing TKLM through the GUI.
 
Before you can change the password of the DB2 Administrator user ID, you must change the password for the system user entry. To resolve the password sync issue follow these steps:
Note: The original IBM document is located here.

1.     Log on to Tivoli Key Lifecycle Manager server as root.
2.     Change user to the tklmdb2 system user entry. Type:
su tklmdb2
3.     Change the password. Type:
passwd
Specify the new password.
4.     Exit back to root.
exit
5.     In the TIP_HOME/bin directory, use the wsadmin interface that the WebSphere® Application Server provides to specify the Jython syntax.
./wsadmin.sh -username TIPAdmin -password mypwd -lang jython
6.     Change the password for the WebSphere Application Server data source:
a.     The following command lists the JAASAuthData entries:
wsadmin>print AdminConfig.list('JAASAuthData')
The result might look like this example:
(cells/TIPCell|security.xml#JAASAuthData_1396539704930)
(cells/TIPCell|security.xml#JAASAuthData_1396539705604)
b.    Type the AdminConfig.showall command for each entry, to locate the alias tklm_db. For example, type on one line:
print AdminConfig.showall ('(cells/TIPCell|security.xml#JAASAuthData_1396539704930)')
The result is like this example:
[alias tklmdb]
[description "TKLM database user J2C authentication alias"]
[password *****]
[userId ustklmdb]

And also type on one line:
print AdminConfig.showall ('(cells/TIPCell|security.xml#JAASAuthData_1396539705604)')
The result is like this example:
[alias tklm_db]
[description "TKLM database user j2c authentication alias"]
[password *****]
[userId ustklmdb]

c.     Change the password for the tklm_db alias that has the identifier JAASAuthData_1396539705604:
print AdminConfig.modify('JAASAuthData_list_entry', '[[password passw0rdc]]')
For example, type on one line:
print AdminConfig.modify
('(cells/TIPCell|security.xml#JAASAuthData_1396539705604)',
'[[password <password>]]')

d.    Change the password for the tklmdb alias that has the identifier JAASAuthData_1396539704930:
print AdminConfig.modify('JAASAuthData_list_entry', '[[password passw0rdc]]')
For example, type on one line:
print AdminConfig.modify
('(cells/TIPCell|security.xml#JAASAuthData_1396539704930)',
'[[password <password>]]')

e.     Save the changes:
print AdminConfig.save()
f.     Exit back to root.
exit
g.    In the TIP_HOME/bin directory, stop the Tivoli Integrated Portal application. For example, as TIPAdmin, type on one line:
stopServer.sh server1 -username tipadmin -password passw0rd
The result is like this example:

ADMU0116I: Tool information is being logged in file
//opt/IBM/tivoli/tiptklmV2/profiles/TIPProfile/logs/server1/stopServer.log
ADMU0128I: Starting tool with the TIPProfile profile
ADMU3100I: Reading configuration for server: server1
ADMU3201I: Server stop request issued. Waiting for stop status.
ADMU4000I: Server server1 stop completed.

h.     Start the Tivoli Integrated Portal application. As the Tivoli Integrated Portal administrator, type on one line:

 startServer.sh server1

i.      In the TIP_HOME/bin directory, use the wsadmin interface that the WebSphere Application Server provides to specify the Jython syntax.

./wsadmin.sh -username tipadmin -password mypwd -lang jython

j.      Verify that you can connect to the database using the WebSphere Application Server data source.

i.       First, query for a list of data sources. Type:

print AdminConfig.list('DataSource')

The result might be like this example:

"TKLM DataSource(cells/TIPCell/nodes/TIPNode/servers/server1|resources.xml#DataSource_1396539707355)"
"TKLM scheduler XA Datasource(cells/TIPCell/nodes/TIPNode/servers/server1|resources.xml#DataSource_1396539709814)"
"Tivoli Common Reporting Data Source(cells/TIPCell|resources.xml#DataSource_1396539473259)"
DefaultEJBTimerDataSource(cells/TIPCell/nodes/TIPNode/servers/server1|resources.xml#DataSource_1000001)
ttssdb(cells/TIPCell|resources.xml#DataSource_1396539429750)

ii.      Type:
print AdminControl.testConnection('TKLM DataSource(cells....)')
For example, type on one line:
print AdminControl.testConnection ('TKLM DataSource(cells/TIPCell/nodes/TIPNode/servers/server1|resources.xml#DataSource_1396539707355)')
iii.     Test the connection on the remaining data source. For example, type:
print AdminControl.testConnection ('TKLM scheduler XA Datasource(cells/TIPCell/nodes/TIPNode/servers/server1|resources.xml#DataSource_1396539709814)')
iv.    In both cases, you receive a message that the connection to the data source was successful. For example:

WASX7217I: Connection to provided data source was successful.

TKLM - Things To Know Part 2


Resolving TKLM Memory Issue

TKLM has a known issue with the Java memory heap size. This memory issue results in TKLM becoming slow to respond or ceasing to issue keys. You can check for an Out Of Memory condition by reviewing /tklm/tip/profiles/TIPProfile/logs/server1/SystemOut.log and looking for the following error:

 java.lang.OutOfMemoryError

If this error is present, the short-term solution is to restart the primary and replica TKLM instances to clear the out-of-memory state. The long-term solution is to change the TKLM memory settings in the two files that determine the process's memory allotment.
  • Restart the TKLM primary and replica, which flushes the memory in use and lets TKLM issue keys as before.

Note: This is a short-term fix and does not resolve the problem; it will recur after a period of time.
  • The permanent solution is to reduce the TKLM audit level to low and increase the wsadmin process's Java memory heap size. This needs to be done in two locations by following the steps provided:

1.     Backup the /tklm filesystem before you edit the files.

sudo dsmc incr /tklm

2.     Reduce the TKLM audit level to low by using the TKLM web GUI and navigating to
1)     TKLM > Configuration > Audit
2)     Select Low and click OK
Confirm by checking /tklm/tip/products/tklm/config/TKLMgrConfig.properties and verifying that the Audit.event.types and Audit.event.outcome variables read as follows:

Audit.event.types = runtime, authorization, authorization_terminate, resource_management, key_management
 Audit.event.outcome = failure

3.     Edit wsadmin script and server.xml manually.
1)     You will find the two files that require editing, server.xml and wsadmin.sh, in the following directories:
/tklm/tip/profiles/TIPProfile/config/cells/TIPCell/nodes/TIPNode/servers/server1/server.xml
/tklm/tip/bin/wsadmin.sh

4.     Modify the wsadmin -Xmx setting.
1) Locate and modify the entry below.
Default value:
PERF_JVM_OPTIONS="-Xms256m -Xmx256m -Xj9 -Xquickstart"

Set the max value:
PERF_JVM_OPTIONS="-Xms256m -Xmx1280m -Xj9 -Xquickstart"
Note: The maximum heap size for wsadmin is 1280 MB.

2) Save the changes

5.     Now modify the server.xml file by setting the genericJvmArguments variable to "-Xmx2048m"
1)     Locate and modify the entry below
genericJvmArguments="-Xmx2048m"
2)     Save the changes

6.     As root stop TKLM
1)    /tklm/tip/bin/stopServer.sh server1
7.     As root start TKLM
1)    /tklm/tip/bin/startServer.sh server1
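Steps 4 and 5 above can also be scripted. Here is a sketch using sed against simulated local copies of the two files; the real files live at /tklm/tip/bin/wsadmin.sh and the server.xml path listed in step 3, and the fragments below are reduced to just the lines being edited:

```shell
# Simulated fragments of the two files being edited.
cat > wsadmin.sh <<'EOF'
PERF_JVM_OPTIONS="-Xms256m -Xmx256m -Xj9 -Xquickstart"
EOF
cat > server.xml <<'EOF'
<jvmEntries genericJvmArguments=""/>
EOF

cp wsadmin.sh wsadmin.sh.bak      # always keep backups before editing
cp server.xml server.xml.bak

sed -i 's/-Xmx256m/-Xmx1280m/' wsadmin.sh     # wsadmin heap cap is 1280 MB
sed -i 's/genericJvmArguments="[^"]*"/genericJvmArguments="-Xmx2048m"/' server.xml

grep Xmx wsadmin.sh server.xml    # confirm both edits took
```

On the real files the sed patterns should be checked against the current contents first, since genericJvmArguments may already carry other options.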

TKLM - Things To Know Part 3


Identifying and Releasing Empty Volumes Back To Scratch

When the TKLM server is unable to issue keys, TSM will assign tapes to a storage pool and then fail to write to them. After performing the resync, check the TSM servers for any volumes that are assigned to a storage pool but contain no data, and release them back to scratch. Use the following select statement to list the volumes at 0 percent utilized. You will notice it creates a command within the results, allowing you to quickly release the tapes with a simple cut and paste into the TSM admin command line.
select varchar(a.server_name,10) ||':'|| 'del vol', varchar(b.volume_name,8) as volname, b.pct_utilized, varchar(b.stgpool_name,15) as stgpool_name from status a, volumes b where b.pct_utilized=0 and b.devclass_name<>'DISK' order by b.stgpool_name, b.pct_utilized

You should see the following if TSM shows tape(s) with 0% utilized:

Unnamed[1]              VOLNAME        PCT_UTILIZED     STGPOOL_NAME
-------------------     ---------     -------------     ----------------
TSM01:del vol           J02579                  0.0     COPYTAPE
TSM01:del vol           J00243                  0.0     DBTAPE
TSM01:del vol           K00700                  0.0     DBTAPE_B_NC
TSM01:del vol           J00039                  0.0     LOGTAPE
TSM01:del vol           H70341                  0.0     LOGTAPE
TSM01:del vol           J00186                  0.0     LOGTAPE
TSM01:del vol           J00115                  0.0     LOGTAPE
TSM01:del vol           J00528                  0.0     LOGTAPE
TSM01:del vol           J01224                  0.0     LOGTAPE
TSM01:del vol           J01255                  0.0     LOGTAPE
You can use a portion of the results to execute against the server to release the tapes. If you'd rather not see PCT_UTILIZED or STGPOOL_NAME, remove them from the select:
select varchar(a.server_name,10) ||':'|| 'del vol', varchar(b.volume_name,8) as volname from status a, volumes b where b.pct_utilized=0 and b.devclass_name<>'DISK' order by b.stgpool_name, b.pct_utilized

Unnamed[1]              VOLNAME
-------------------     ---------
TSM01:del vol           J02579
TSM01:del vol           J00243
TSM01:del vol           K00700
TSM01:del vol           H70341
TSM01:del vol           J00039
TSM01:del vol           J00115
TSM01:del vol           J00186
TSM01:del vol           J00528
TSM01:del vol           J01173
TSM01:del vol           J01224
TSM01:del vol           J01255

Run this select against all the TSM servers whose libraries use the TKLM server, and run the results through the TSM admin command line to release the tapes back to scratch. Notice that we are NOT using the DISCARD=YES flag, for a reason: without it, TSM will not delete a volume that still holds some data, even when the amount is so low it reports as 0% utilized.

Note: When deleting volumes DO NOT USE THE DISCARD=YES FLAG! Leaving it off keeps you from deleting a valid storage pool volume.

SQL: CASE and CONCAT

So I was trying to build a better report of TSM client levels, replacing the crappy Windows OS level with the correct version name using CASE, but I was worried whether CASE would work with two concatenated fields. Well, it does, and quite well. The only issue is that if platform_name is longer than the varchar setting, you will receive a warning at the end of the select (the select runs successfully but will truncate any results for platform_name, which is easily fixed).

select case -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 5.00' then 'WinNT 2000' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 5.02' then 'WinNT 2003' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.00' then 'WinNT 2008' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.01' then 'WinNT 2008 R2' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.02' then 'WinNT 2012' -
  when varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) ='WinNT 6.03' then 'WinNT 2012 R2' -
  else varchar(platform_name,10) || ' ' || cast(client_os_level as char(14)) -
end -
  AS platform_name, -
cast(client_version as char(1)) || '.' || cast(client_release as char(1)) || '.' || cast(client_level as char(1)) || '.' || cast(client_sublevel as char(1)) as TSM_Version, count(distinct tcp_name) AS COUNT from nodes where LASTACC_TIME>(CURRENT_TIMESTAMP - 70 DAYS) and node_name like '%SU%' group by platform_name, client_os_level, client_version, client_release, client_level, client_sublevel


The results were exactly what I wanted.

PLATFORM_NAME          TSM_VERSION           COUNT
------------------     -----------     -----------
SUN SOLARIS 5.9        5.2.2.0                   4
WinNT 2000             5.3.0.0                   1
WinNT 2000             5.3.6.0                   1
WinNT 2003             5.3.0.0                   4
WinNT 2003             5.3.2.0                   6
WinNT 2003             5.3.4.0                   6
WinNT 2003             5.4.0.2                   3
WinNT 2003             5.4.1.4                   2
WinNT 2003             5.4.2.0                   2
WinNT 2003             5.4.3.0                   2
WinNT 2003             5.5.0.4                   8
WinNT 2003             5.5.1.0                   1
WinNT 2003             5.5.2.0                   1
WinNT 2003             5.5.3.0                   2
WinNT 2003             6.1.3.0                   1
WinNT 2008             5.5.0.4                   1
WinNT 2008 R2          6.1.4.0                   3
WinNT 2008 R2          6.2.4.0                   2
WinNT 2008 R2          6.3.0.0                   2
WinNT 2012             6.4.1.0                   1