Channel: TSMAdmin

Poll Results

The TSM Server usage poll closed January 1st, and the results were interesting. See for yourself, but I found it notable that 6.3 and 7.1 both see more use than 6.4. I guess it's not worth going to 6.4 now that 7.1 is out?


TSM Rumor Mill

So I was talking with an IBM source about IBM's recent layoffs, and she stated that many groups were being restructured and TSM was affected. According to my source, TSM may even be getting a name change. This is no shock for those who remember when TSM was called ADSM under the ADSTAR (ADvanced STorage And Retrieval) division. The partial name she dropped was something along the lines of "Specter blah blah blah," so take this rumor with some reservations. IBM is going through a major upheaval as it decides what its service path will be in the future, so it's anybody's guess where TSM will end up.

Rumor Mill Update!

So my source at IBM was right except for the name, and even then she was close: IBM is restructuring its storage software group under the Spectrum Storage moniker (Specter was close, but Spectrum is a better title). ZDNet reports that IBM will be investing $1 billion in software-defined storage, which also means TSM is no longer under the Tivoli division. (Please, please, please let this be true!) In my opinion Tivoli was a horrible brand name, and IBM should never have put ADSM under its umbrella.

Hopefully we will see some big changes to TSM to make it more competitive. Some cloud connectivity would be a good start, as more companies are implementing hybrid data centers. I'd also like to see the new administration and reporting tool become a little more flexible with user-defined reports, but we'll have to wait and see where this restructuring takes TSM. Or should we now call it Spectrum Storage Manager?

Data Domain Compression

I currently manage seven Data Domains (890s and 670s) and none of them are seeing compression above 5x. We obviously need to do some cleanup to get rid of data that is a bad candidate for dedupe, but the question is where to start. Have any of you successfully increased your dedupe compression through cleanup? If so, what steps did you take? One starting point from the TSM side is sketched below.
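As a first pass, a simple occupancy ranking at least shows which nodes are filling the DD-backed pools and are worth reviewing for poorly deduplicating data. This is just a sketch; the 'DD%' pool name pattern is a placeholder for however your DD storage pools are named.

select varchar(node_name,24) as NODE_NAME, varchar(stgpool_name,15) as STGPOOL, -
cast(sum(logical_mb)/1024 as dec(10,1)) as LOGICAL_GB -
from occupancy where stgpool_name like 'DD%' -
group by node_name, stgpool_name order by 3 desc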

A Better Q MOUNT?

I was playing around with my QDRV script, which I feel gives a better display than Q DRIVE, and realized that I could produce a better Q MOUNT display than the built-in TSM command provides. FYI: unlike the Q MOUNT command, my QMNT script shows the device definition for the current drive owner, such as a storage agent or library client. I have provided both of my scripts below; let me know what you think. Suggestions are welcome.

QDRV Script

select cast((library_name)as char(15)) as LIBRARY_NAME, -
cast((DRIVE_NAME)as char(16)) as DRIVE_NAME, -
cast((drive_state)as char(10)) as DRIVE_STATE, -
cast((volume_name)as char(8)) as VOL_NAME, cast((online)as char(10)) as ONLINE, -
cast((ALLOCATED_TO)as char(20)) as DRV_OWNER from drives order by library_name, drive_name







QDRV Macro

DEFINE SCRIPT  QDRV DESC="Show tape drive status"
UPDATE SCRIPT  QDRV "select cast((library_name)as char(15)) as LIBRARY_NAME, -"
UPDATE SCRIPT  QDRV "cast((DRIVE_NAME)as char(16)) as DRIVE_NAME, -"
UPDATE SCRIPT  QDRV "cast((drive_state)as char(10)) as DRIVE_STATE, -"
UPDATE SCRIPT  QDRV "cast((volume_name)as char(8)) as VOL_NAME, cast((online)as char(10)) as ONLINE, -"
UPDATE SCRIPT  QDRV "cast((ALLOCATED_TO)as char(20)) as DRV_OWNER from drives order by library_name, drive_name"





QMNT Script

select varchar(a.library_name,15) as LIB_NAME, -
cast((a.DRIVE_NAME)as char(16)) as DRIVE_NAME, -
cast((a.drive_state)as char(10)) as DRIVE_STATE, -
cast((a.volume_name)as char(8)) as VOL_NAME, -
varchar(b.device,12) as device, cast((a.online)as char(10)) as ONLINE, -
cast((a.ALLOCATED_TO)as char(20)) as DRV_OWNER from drives a, paths b -
where a.library_name=b.library_name and a.drive_name=b.destination_name -
and b.source_name=a.ALLOCATED_TO order by a.drive_name



QMNT Macro

DEFINE SCRIPT  QMNT DESC="Show tape mount status"
UPDATE SCRIPT  QMNT "select varchar(a.library_name,15) as LIB_NAME, -"
UPDATE SCRIPT  QMNT "cast((a.DRIVE_NAME)as char(16)) as DRIVE_NAME, -"
UPDATE SCRIPT  QMNT "cast((a.drive_state)as char(10)) as DRIVE_STATE, -"
UPDATE SCRIPT  QMNT "cast((a.volume_name)as char(8)) as VOL_NAME, -"
UPDATE SCRIPT  QMNT "varchar(b.device,12) as device, cast((a.online)as char(10)) as ONLINE, -"
UPDATE SCRIPT  QMNT "cast((a.ALLOCATED_TO)as char(20)) as DRV_OWNER from drives a, paths b -"
UPDATE SCRIPT  QMNT "where a.library_name=b.library_name and a.drive_name=b.destination_name -"
UPDATE SCRIPT  QMNT "and b.source_name=a.ALLOCATED_TO order by a.drive_name"
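Once the scripts are defined (for example by feeding each macro through the administrative client's MACRO command), they are invoked like any other server script:

run qdrv
run qmnt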

Q MOUNT Research

So in yesterday's post I spoke of a "better Q MOUNT" using a script I wrote. The problem is that the Q MOUNT command does not reference an accessible TSM table. It appears to gather the mount information, along with the associated process or session, from various tables (as seen with Q MOUNT F=D). So I became more intrigued with how TSM gathers the info and decided to bypass TSM altogether and do the research on the DB2 side. Well, out of 779 tables in the TSM 7.1 DB2 database I did not see any with MOUNT in the name (you can find a list of all the TSM 7.1 DB tables here). There are definitely some tables in the list that are hidden from TSM or unused, but I queried a number of them looking for any with fields (columns) that correlated to the Q MOUNT command. The closest I came was the MMS_DRIVES table, which has the following fields.

$ db2 "select char(COLNAME,24) as COLNAME,char(TYPENAME,24)as DATATYPE,LENGTH,SCALE from syscat.columns where tabname='MMS_DRIVES' and tabschema='TSMDB1'"

COLNAME                  DATATYPE                 LENGTH      SCALE
------------------------ ------------------------ ----------- ------
ACSDRVID                 VARCHAR                          126      0
CLEANFREQ                INTEGER                            4      0
DEVICE                   VARCHAR                           65      0
DEVTYPE                  INTEGER                            4      0
DEV_INQ                  SMALLINT                           2      0
DRIVENAME                VARCHAR                           31      0
ELEMENT                  INTEGER                            4      0
INQUIRY                  VARCHAR                         1025      0
KBYTES_PROC              BIGINT                             8      0
KBYTES_PROC_HI           INTEGER                            4      0
LIBNAME                  VARCHAR                           31      0
ONLINE                   INTEGER                            4      0
OWNER                    VARCHAR                           65      0
OWNERVOL                 VARCHAR                         1025      0
RD_FORMAT                BIGINT                             8      0
SERIAL                   VARCHAR                           65      0
UPDATE_DATE              TIMESTAMP                         10      6
UPDATOR                  VARCHAR                           65      0
WR_FORMAT                BIGINT                             8      0
WWN                      VARCHAR                           17      0


There should be a table that shows the drive, what tape is mounted, what process or session it's assigned to, and who owns it, just like a Q MOUNT F=D. I haven't found it yet.
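In the meantime, MMS_DRIVES looks queryable for the mounted volume and owner directly from DB2, assuming OWNERVOL and OWNER mean what their names suggest. A sketch, not verified against a busy library:

$ db2 "select char(LIBNAME,12) as LIBRARY, char(DRIVENAME,16) as DRIVE, char(OWNERVOL,12) as VOLUME, char(OWNER,20) as OWNER from TSMDB1.MMS_DRIVES order by LIBNAME, DRIVENAME"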

NOTE: Of the 779 tables I listed, a more refined search shows 154 tables that correspond to what we seem to be able to access from the TSM admin command line.

TSM 7.1 Discussion

I was asked by a former colleague to start a discussion on the merits of TSM 7.1. I currently run TSM 7.1 on my newer TSM servers and don't see a huge difference from our 6.4 servers. I know there are some features and updates in 7.1, but none that impact me other than possible performance enhancements.

So tell me why you upgraded to TSM 7.1 or what features made it a "No Brainer" for you to implement 7.1. I'd like to hear from you.

Best Desktop Linux Distro

OK, not a TSM subject, but I am running Linux (Kubuntu 15.04) on an old Sony laptop and, while I like it, something is missing. I want a launcher bar but didn't like Unity. Can anyone suggest a distro that they like? I've used Mint with Cinnamon and MATE, but thought I'd try KDE again after the new release. I don't particularly like GNOME, but...


RHEL 6.6, non IBM devices and UDEV rules

A few days back we had to move one of our TSM servers and its library, which is a shared one. We did this before with TSM 5.x and everything went fine that day. Now we are on 7.1, and I found that even though SANDISCOVERY found the new paths correctly (and updated them on the library manager), the drives were not working.
The TSM 6.x+ server no longer runs under the "root" account, but when a new device is discovered its device file is created as read/write for root only.
Normally (when you manually add a new device) you update the device file privileges and create the links using "/opt/tivoli/tsm/devices/bin/autoconf -a", but this does not happen automatically.

At the TSM Symposium 2013 in Berlin there was a great presentation, "Tape configuration for TSM" by Bruno Friess, which (among other things) solves this situation by using persistent names via udev.
http://www.exstor.de/wp-content/uploads/2013/09/tape_config_for_tsm.pdf
 
It only has two minor limitations:
a) the Linux variant covered is SLES
b) it lacks detail on udev with non-IBM devices

As our Linux servers run on RHEL and our libraries are Overland brand with HP tape drives, I had to modify the steps a bit:
a) "udevadm info ..." does not display serial numbers for HP tape drives and Overland libraries; you need the sg3_utils package for the "sginfo" command to get the device serial numbers
b) links are created as /dev/library1 and /dev/drive1 (2, 3), pointing to the corresponding /dev/sgX device with crw-rw-rw permissions
c) the file below was created as /etc/udev/rules.d/71-persistent-tape.rules
# This file should create persistent devices for HP tapes and libraries used by TSM

# Known serial numbers
# <library1>    LIBRSN1234
# <drive1>      DRV1SN3456
# <drive2>      DRV2SN4567
# <drive3>      DRV3SN5678

KERNEL=="sg*", SUBSYSTEM=="scsi_generic", SYSFS{type}=="8", PROGRAM="/usr/bin/sginfo -s /dev/%k", RESULT=="*LIBRSN1234*", SYMLINK+="library1",MODE="0666"
KERNEL=="sg*", SUBSYSTEM=="scsi_generic", PROGRAM="/usr/bin/sginfo -s /dev/%k", RESULT=="*DRV1SN3456*", SYMLINK+="drive1",MODE="0666"
KERNEL=="sg*", SUBSYSTEM=="scsi_generic", PROGRAM="/usr/bin/sginfo -s /dev/%k", RESULT=="*DRV2SN4567*", SYMLINK+="drive2",MODE="0666"
KERNEL=="sg*", SUBSYSTEM=="scsi_generic", PROGRAM="/usr/bin/sginfo -s /dev/%k", RESULT=="*DRV3SN5678*", SYMLINK+="drive3",MODE="0666"
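Once the rules are in place and udev has re-evaluated them (after a reload or a reboot), the TSM paths can reference the persistent names instead of the raw /dev/sgX devices. A sketch with hypothetical server, library, and drive names:

update path TSMSRV1 library1 srctype=server desttype=library device=/dev/library1
update path TSMSRV1 drive1 srctype=server desttype=drive library=library1 device=/dev/drive1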
Many thanks to Bruno, and I hope this helps someone.

Remote Site Backup

OK, here is the scenario: we have a 2.8TB Windows file server at a remote location and we have to back it up over the WAN. We can't put a TSM server locally due to many issues (one being that it's not worth dedicating resources to two or three servers). Based on the available throughput, the backup was calculated to take 25+ days to complete. So the idea I came up with was to copy the data onto a 4TB USB drive, ship the drive to our data center, attach it to another Windows server using the same drive letter and label the data has at the remote site, set up a dsm.opt with the node name of the remote server, and run the backup. The hope is that when we then run the backup remotely, TSM will see all its data as already backed up and pick up with incremental backups. Will it work? Has anyone tried it? I'm worried there are too many file attributes involved and it will still try to back up the files all over again. A rough sketch of the dsm.opt for the seeding backup is below.
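The dsm.opt on the data-center staging server only needs a handful of lines; something like the following, where the node name and server address are made up and the drive letter has to match the remote server exactly:

* dsm.opt on the staging server - node name and server address are placeholders
NODENAME           REMOTE-FILESRV01
TCPSERVERADDRESS   tsmserver.example.com
TCPPORT            1500
PASSWORDACCESS     GENERATE
DOMAIN             E: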

TSM (Spectrum Protect) Symposium 2015, Dresden again

Value Replacement in Select Statements

When working with the TSM DB there are times you might want to change or replace a returned value for easier reporting. I am finding that, with TSM now using DB2, more SQL functions are available when gathering information from the DB with select statements. There are a couple of functions that facilitate this, which I will discuss below. The first one to cover is the CASE function (available in TSM even pre-DB2), which also has the benefit of adding flow logic to your SELECT statement. I first came across it when I found a Q EVENT macro someone had created on a TSM server I inherited.

def script event-check desc="Events - Exceptions"
upd script event-check "/* ---------------------------------------------*/"
upd script event-check "/* Script Name: event-check                          */"
upd script event-check "/* ---------------------------------------------*/"
upd script event-check '  select -'
upd script event-check '  schedule_name, -'
upd script event-check '   cast(SUBSTR(CHAR(actual_start),12,8) as char(8)) AS START, - '
upd script event-check '   node_name, -'
upd script event-check '   cast(status as char(10)) as "STATUS", -'
upd script event-check '    case -'
upd script event-check "      when result=0  then ' 0-Succ' -"
upd script event-check "      when result=4  then ' 4-SkFi' -"
upd script event-check "      when result=8  then ' 8-Warn' -"
upd script event-check "      when result=12 then '12-Errs' -"
upd script event-check "      else cast(result as char(7)) -"
upd script event-check '    end -'
upd script event-check '      as "RESULT" -'
upd script event-check '  from events -'
upd script event-check '  where scheduled_start<=(current_timestamp - 24 hours) -'
upd script event-check '    and result<>0 and node_name is not NULL'
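Once the macro has been loaded, the report is produced like any other server script:

run event-check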



As you can see, the macro replaces the bare result code with some extra information to help identify what happened with the backup. The CASE function also comes in handy when converting a BYTES value into the appropriate unit of KB, MB, GB, TB, etc.

....
CASE -
    WHEN bytes>1099511627776 THEN CAST(DEC(bytes)/1024/1024/1024/1024 AS DEC(5,1))||' TB' -
    WHEN bytes>1073741824 THEN CAST(DEC(bytes)/1024/1024/1024 AS DEC(5,1))||' GB' -
    WHEN bytes>1048576 THEN CAST(DEC(bytes)/1024/1024 AS DEC(5,1))||' MB' -
    WHEN bytes>1024 THEN CAST(DEC(bytes)/1024 AS DEC(5,1))||' KB' -
    ELSE CAST(bytes AS DEC(5,0))||' B' -
END AS bytes, -
....
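To put that in context, here is one way the block might slot into a full statement, assuming the SUMMARY table's BYTES column (swap in whatever table and column you actually report on):

select varchar(entity,20) as NODE_NAME, activity, -
case -
  when bytes>1099511627776 then cast(dec(bytes)/1024/1024/1024/1024 as dec(5,1))||' TB' -
  when bytes>1073741824 then cast(dec(bytes)/1024/1024/1024 as dec(5,1))||' GB' -
  when bytes>1048576 then cast(dec(bytes)/1024/1024 as dec(5,1))||' MB' -
  when bytes>1024 then cast(dec(bytes)/1024 as dec(5,1))||' KB' -
  else cast(bytes as dec(5,0))||' B' -
end as DATA_MOVED -
from summary where activity='BACKUP' and start_time>(current_timestamp - 24 hours)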

There is also another option for changing results when a field has a NULL value. When creating a script that might encounter NULL values, rather than returning nothing you can use the NVL and NVL2 functions. For example, say you want to create a report that shows all nodes and specifies whether they have ever connected to TSM. This can be determined not by LASTACC_TIME but by the PLATFORM_NAME value: if PLATFORM_NAME is NULL, the node has never connected to TSM to set the value. So we can create a select that looks at the PLATFORM_NAME value and returns 'Has Not Connected' in its place.

select varchar(node_name,40) as "Node Name", nvl(platform_name, 'Has Not Connected') AS "Platform Name" from nodes order by platform_name desc

Node Name                                     Platform Name    
-----------------------------------------     ------------------
HUCDSON.ADCC.W.ZZ.OU812.NET.SQL               Has Not Connected
DERELICT.ADCC.W.ZZ.OU812.NET.SQL              Has Not Connected
NOSTROMO.ADCC.W.ZZ.OU812.NET.MAIL             Has Not Connected
SULACO.ADCC.AIX.ZZ.OU812.NET.SQL              Has Not Connected
WIN-SEC-JMP-VH.ADCC.W.ZZ.OU812.NET            Has Not Connected
HICKS.ADCC.AIX.ZZ.OU812.NET.SQL               Has Not Connected
VASQUEZ.DLX.W.CS.AS19229.NET                  WinNT            
BISHOP.ULX.W.CS.AS19229.NET                   WinNT            
GORMAN.LV426.W.CS.AS19229.NET                 WinNT            
SPUNKMEYER.LV426.W.CS.AS19229.NET             WinNT 

...

The results work, but what if you want to substitute a value for PLATFORM_NAME when it is not NULL? You can use the NVL2 function for this and return a list of the nodes showing whether they are active or inactive.

select varchar(node_name,40) as NODE_NAME, nvl2(platform_name, 'Active Node', 'Inactive') as STATUS from nodes order by Platform_name desc

Node Name                                     Status          
-----------------------------------------     ----------------
HUCDSON.ADCC.W.ZZ.OU812.NET.SQL               Inactive       
DERELICT.ADCC.W.ZZ.OU812.NET.SQL              Inactive       
NOSTROMO.ADCC.W.ZZ.OU812.NET.MAIL             Inactive       
SULACO.ADCC.AIX.ZZ.OU812.NET.SQL              Inactive       
WIN-SEC-JMP-VH.ADCC.W.ZZ.OU812.NET            Inactive       
HICKS.ADCC.AIX.ZZ.OU812.NET.SQL               Inactive       
VASQUEZ.DLX.W.CS.AS19229.NET                  Active         
BISHOP.ULX.W.CS.AS19229.NET                   Active         
GORMAN.LV426.W.CS.AS19229.NET                 Active         
SPUNKMEYER.LV426.W.CS.AS19229.NET             Active
        
...

So all values that were not NULL were reported as Active. These functions can be very helpful when you need a specific report format. If you have other examples or functions you find useful, feel free to leave a note in the comment section and I can add them to this post with credit to you.

                  

TSM (Spectrum Protect) Symposium 2015, Dresden (in pictures)

We made it: about 340 people! Photo captions from the event follow.

Norbert Pott (IBM Germany), Tommy Hueber (Rocket Software, The Netherlands, http://www.tsmblog.org/), me

Matt Anglin (IBM USA)

Who always knows everything and tells the truth!

Zsolt Fekete (SCSS Kft., Hungary), me and Paul Oh (Sentia Solutions Inc., Canada)



Oracle RMAN Catalogue Cleanup - Revisited

Over a decade ago I wrote an article on object cleanup, due to issues with RMAN and our DBAs not keeping the catalogue synced with TSM. I revisited it in 2007 and since then have not had to use it much. With version 6 of TSM I have not worried about my DB size or about old data not expiring, since TSM seems to handle it somewhat better than the older versions did. I recently had to use the command again and realized that the DELETE OBJECT command as it was used in TSM 5 is not 100% correct for TSM 6. The old syntax was:

TSM v5

delete object 0 [Object ID Number]
The 0 tells TSM how many dependent objects to delete (at least I believe that's what it does; I can't fully remember). With TSM 6 you no longer need to provide the dependent object count in the command.

TSM v6 and higher

delete object [Object ID Number]

This command works (although it can sometimes take a while to complete). So, using the right select, you can produce a list and delete the objects from TSM. Reclamation will return the space, but understand that this is essentially a manual, object-by-object expiration, and remember it is not a command support will help you with. USE AT YOUR OWN RISK!

Here is the select:

select 'delete object', object_id from backups where node_name=[TDP NODENAME] and
backup_date < '2007-06-01 00:00:00'
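To turn that output into something runnable, one approach (a sketch, with placeholder credentials and node name) is to capture it with the administrative client and feed it back in as a macro:

# generate the DELETE OBJECT commands ('TDP_NODENAME' is a placeholder)
dsmadmc -id=admin -password=xxxxx -dataonly=yes "select 'delete object', object_id from backups where node_name='TDP_NODENAME' and backup_date<'2007-06-01 00:00:00'" > delobj.mac

# review delobj.mac, then run it
dsmadmc -id=admin -password=xxxxx -itemcommit macro delobj.mac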

Find WWNs in AIX

So I use the following script to find WWNs in AIX. Does anyone have a better script they'd like to share?

#!/usr/bin/ksh
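# For every fibre channel adapter (fcsX), print a comma-separated line of:
#   adapter name, WWN (the lscfg "Network Address" field), physical location code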

CSV="," 

for FCSX in `lscfg | grep fcs | awk '{ print $2 }' | sort`
do 
echo ${FCSX}${CSV}`lscfg -vl ${FCSX} | grep "Network Address" | sed -e "s/^.*\.//"`${CSV}`lscfg -l ${FCSX} | awk '{ print $2 }'` 
done


DR Test - Things learned

I just did a DR test from one data center to another involving TSM and our Data Domain (DD), which we have configured for both NFS and VTL usage. Things to know:


  1. We back up the TSM DB to the DD NFS file system.
  2. The TSM server was not brought up on its own LPAR at the DR site, but shared an LPAR with another TSM instance.
  3. The DR site could not facilitate LAN-free backups like the primary site.

So we built the secondary instance on an LPAR already running a TSM server that services the customer's development environment. Then I disabled the replication pair and we mounted the NFS export to the LPAR so we could restore the TSM DB. This is where our main problem reared its head: the NFS file system from the DD was mounting under the primary TSM instance's ID. We wrestled with this for an hour or so until, after Googling the issue and reading other people's DD notes, I realized the problem was the configuration. I would have been fine disabling the replication pair and mounting it to the TSM LPAR if it had used the default user ID, but the primary instance was the owner and we could not change permissions due to what the default ID and settings on the DD allow. So I had to unmount the DD NFS file system to delete the pair on the DD, then remount it with full read/write permissions; I was then able to mount it under an alternate ID. Once we overcame this we were able to start the TSM DB restore, which is where our second issue arose.
While restoring the TSM DB, the active logs were not being restored to the active log directory. The first time I used dsmserv restore db it ran fine until all the DB records were restored, and then I received the following error:

ANR2970E Database rollforward terminated - DB2 sqlcode -1004 sqlerrmc TSMDB1

The restore process restored the logs to the instance's home directory, eventually filling the file system to 100% and erroring out. I thought the logs were recovery-log related, so I added the RECOVERYLOGDir option to the restore command and got the same result. That wasted an hour, so after some more Google searches and a call with IBM support I decided to add the ACTIVELOGDirectory option to the restore. I didn't add it because the IBM support tech suggested it (he didn't); I just realized the recovery log directory never filled with any logs, and the only other logs they could be were active log files. With the ACTIVELOGDirectory option added to the restore command, the DB restore worked without any errors. The question is: why didn't TSM use the ACTIVELOGDirectory option stated in dsmserv.opt? The RECOVERYLOGDir option was honored, but the recovery logs were never more than maybe 1GB, while the active log was over 53GB, and db2diag.0.log registered an error that no recovery log directory was listed so the default would be used. What the hell??? It is listed in the dsmserv.opt...

ACTIVELOGDirectory          /drtsmserver/tsm30log
ARCHLOGDirectory            /drtsmserver/tsm30arch
MAXSESS 300
COMMTIMEOUT 6000
IDLETIMEOUT 6000
MAXSESSIONS 400
...

So I post this so you can learn from my mistakes. The final restore DB command was:

dsmserv restore db on=db.list recoverydir=/drtsmserver/tsm30fail activelogdir=/drtsmserver/tsm30log

Restoring TSM Without A Volhist

Someone in the comments to an old post just asked for directions on restoring TSM without a volume history or devconfig file. Well, I have some bad news and some not-so-bad-but-not-fun news. We will start with the not-so-bad news. If you don't have a devconfig, don't panic! You can recreate it; that's fairly simple, just a pain. TSM has to have a devconfig file to initialize its devices, so if the devconfig is not present you'll have to create one. Typically you do this when you rebuild a TSM instance. For example, at a DR site you install TSM on the DR server, define the dsmserv.opt, and then define base devices on the new install. Once that has been done you can bring TSM down and attempt a restore using the newly defined device(s).
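As a rough illustration (every name, device, and path here is a placeholder), a hand-built devconfig only needs the device class you will read the DB backup from, plus the library, drive, and path definitions if that device class is tape:

/* minimal hand-built device configuration - all names are placeholders */
DEFINE DEVCLASS DBBACK_FILE DEVTYPE=FILE DIRECTORY=/tsm/dbbackup MAXCAPACITY=100G
/* for a tape restore, define the hardware as well */
DEFINE LIBRARY LIB1 LIBTYPE=SCSI
DEFINE DRIVE LIB1 DRIVE1
DEFINE PATH TSMSRV1 LIB1 SRCTYPE=SERVER DESTTYPE=LIBRARY DEVICE=/dev/smc0
DEFINE PATH TSMSRV1 DRIVE1 SRCTYPE=SERVER DESTTYPE=DRIVE LIBRARY=LIB1 DEVICE=/dev/rmt0
DEFINE DEVCLASS LTOCLASS DEVTYPE=LTO LIBRARY=LIB1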

Now for the bad news. Without the volhist, if you don't know which volume(s) were used for the DB backup, you're kind of screwed. The old DSMSERV DISPLAY DBBACKUPVOLUME command has been removed, and IBM now says the following:

DSMSERV DISPLAY DBBACKUPVOLUME - Information about volumes used for database backup is available from the volume history file. The volume history file is now required to restore the database. 

You can find a list of deleted TSM server commands, utilities, and options at the following link.


TSM Explorer

I've been notified by the developer of TSMExplorer that a newer free edition is available for anyone looking for a GUI-based management tool for TSM. Below is a brief note from the developer.

"TSMExplorer GUI is  free application for TSM server management. The solution is a comfortable tool to control and manage from a single sign-on. This version is free for works with  version TSM 5.x 6.1 6.2”


Why Are You Not Using Google & YouTube?

I had an OS admin contact me through LinkedIn and Gmail asking for help finding his archived data. He didn't have much experience with TSM and was looking for information on how to find long-term backups (i.e. archives). I asked him if he had even tried searching Google. Google and YouTube are great resources for all your needs. For example, if you want to learn more about the new Operations Center you can find a plethora of videos by searching YouTube. You can also use Google to find all sorts of related documents and pages when it comes to APARs and errors. If you haven't done your due diligence you make yourself look dumb. Shoot, you can even search Google from my website and get my posts and relevant outside web pages!

So here is a list of YouTube videos you can reference:

Backup and Archive

Server Administration

Setup TSM Deduplication

Tivoli Data Protection Agents

TDP for Virtual Environments





