poniedziałek, 22 października 2012

Grandfather-father-son


Backup rotation scheme

From Wikipedia, the free encyclopedia
A backup rotation scheme refers to a system of backing up data to computer media (such as tapes) that minimizes, via re-use, the number of media used. The scheme determines how and when each piece of removable storage is used for a backup job and how long it is retained once it has backup data stored on it. Different techniques have evolved over time to balance data retention and restoration needs with the cost of extra data storage media. Such a scheme can be quite complicated if it takes incremental backups, multiple retention periods, and off-site storage into consideration.


Schemes

First In, First Out

A First In, First Out (FIFO) backup scheme saves new or modified files onto the oldest media in the set. With a daily backup onto a set of 14 media, the backup depth is 14 days: each day, the oldest medium is inserted when performing the backup. This is the simplest rotation scheme, and is usually the first to come to mind. It was commonly used when floppy disks were used as backup media.
Advantages of the FIFO scheme include:
  • Used to keep the longest possible tail of daily backups
  • Used when archived backups are not as important (i.e. no need to go back one year)
  • Useful when data before the rotation period is irrelevant
This scheme, however, suffers from the possibility of data loss. To understand why, consider a file in which an unsuspected error is introduced. Several generations of backups and revisions have since occurred. The error is then detected. At this time, it would be pointless to have all of the most recent generations because all of them have the error. It would instead be beneficial to have at least one of the older generations, as it would not have the error.
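The FIFO rotation described above reduces to simple modular arithmetic. A minimal sketch (the function name and 0-based medium indexing are choices made here for illustration):

```python
def fifo_medium(day, set_size=14):
    """Return the 0-based index of the medium to overwrite on a given
    1-based backup day under FIFO rotation: each day the oldest medium
    in the fixed set is reused."""
    return (day - 1) % set_size

# With 14 media, day 15 wraps around and overwrites medium 0,
# giving the 14-day backup depth described above.
```

Once the set wraps, everything older than `set_size` days is gone, which is exactly the data-loss weakness discussed next.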

Grandfather-father-son

Grandfather-father-son backup refers to a common rotation scheme for backup media. Originally designed for tape backup, it works well for any hierarchical backup strategy. The basic method is to define three sets of backups, such as daily, weekly and monthly. The daily, or son, backups are rotated on a daily basis with one graduating to father status each week. The weekly or father backups are rotated on a weekly basis with one graduating to grandfather status each month. In addition, quarterly, biannual, and/or annual backups can also be separately retained. Often one or more of the graduated backups is removed from the site for safekeeping and disaster recovery purposes.
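The tier a given backup lands in depends on calendar conventions the scheme itself does not fix. A sketch under one common convention (assumed here, not mandated by the scheme): the weekly "father" copy is taken on Fridays, and the Friday falling in the last seven days of the month graduates to monthly "grandfather" status:

```python
import datetime

def gfs_tier(date):
    """Classify a backup date into son/father/grandfather tiers.

    Assumed convention: Fridays are the weekly 'father' copies, and the
    last Friday of the month graduates to 'grandfather'. Any other
    weekday is a daily 'son' backup.
    """
    if date.weekday() == 4:  # Friday
        # Last Friday of the month: adding 7 days lands in the next month.
        if (date + datetime.timedelta(days=7)).month != date.month:
            return "grandfather"
        return "father"
    return "son"
```

Swapping Friday for another weekday, or adding quarterly/annual tiers on top, changes only the tests inside the function, not the hierarchy.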

Tower of Hanoi

The Tower of Hanoi rotation method is more complex. It is based on the mathematics of the Tower of Hanoi puzzle, with what is essentially a recursive method. It is a 'smart' way of archiving an effective number of backups as well as the ability to go back over time, but it is more complex to understand. Basically, every tape is associated with a disk in the puzzle, and every disk movement to a different peg corresponds with a backup to that tape. So the first tape is used every other day (1, 3, 5, 7, 9,...), the second tape is used every fourth day (2, 6, 10, ...), the third tape is used every eighth day (4, 12, 20, ...).[1]
A set of n tapes (or tape sets) will allow backups for 2^(n-1) days before the last set is recycled. So, three tapes will give four days' worth of backups, and on the fifth day Set C will be overwritten; four tapes will give eight days, and Set D is overwritten on the ninth day; five tapes will give 16 days, etc. Files can be restored from 1, 2, 4, 8, 16, ..., 2^(n-1) days ago.[2] Mathematically, you can look at the binary notation of the day number: the number of zeros on the (right) end of the number determines the tape number to use.
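The trailing-zeros rule can be sketched in a few lines (the letter labels A, B, C, ... and the capping of the count at the last available tape are choices made here to match the schedules below):

```python
def hanoi_tape(day, n_tapes):
    """Return the tape label for a 1-based day in a Tower of Hanoi rotation.

    Rule from the text: count the trailing zeros in the binary form of
    the day number; tape k handles days with k trailing zeros, and the
    last tape also absorbs higher counts so the set stays finite.
    """
    trailing = (day & -day).bit_length() - 1  # number of trailing zero bits
    return "ABCDEFGH"[min(trailing, n_tapes - 1)]

# Three-tape schedule over one 8-day cycle: A B A C A B A C
print("".join(hanoi_tape(d, 3) for d in range(1, 9)))
```

So tape A is used on every odd day, tape B on days 2, 6, 10, ..., and so on, matching the spacing described above.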
The following tables show which tapes are used on which days of various cycles. Note that the Tower of Hanoi rotation method has the drawback of overwriting the very first backup (day 1 of the cycle) after only two days. However, this can easily be overcome by starting on the last day of a cycle (the last column of each table).

Three-tape Hanoi schedule

Day:  01 02 03 04 05 06 07 08
Set:   A  B  A  C  A  B  A  C

Four-tape Hanoi schedule

Day:  01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
Set:   A  B  A  C  A  B  A  D  A  B  A  C  A  B  A  D

Five-tape Hanoi schedule

Day:  01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16
Set:   A  B  A  C  A  B  A  D  A  B  A  C  A  B  A  E
Day:  17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
Set:   A  B  A  C  A  B  A  D  A  B  A  C  A  B  A  E

Weighted random approach

An alternative approach to keeping generations distributed across all points in time is to delete (or overwrite) past generations (except the oldest and the most recent n generations) when necessary, in a weighted-random fashion. For each deletion, the weight assigned to each deletable generation is its probability of being deleted. One acceptable weight is a constant exponent (possibly the square) of the multiplicative inverse of the duration (possibly expressed in days) between the date of the generation and the generation available before it.
Using a larger exponent leads to a more uniform distribution of generations, whereas a smaller exponent leads to a distribution with more recent and fewer older generations. This technique probabilistically ensures that past generations are always distributed across all points in time as desired.
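A minimal sketch of one deletion step, assuming integer day numbers for the generation dates and using the square exponent mentioned above as the default (the function name and parameters are illustrative, not from any particular backup tool):

```python
import random

def delete_one(dates, n_keep_recent=3, exponent=2, rng=random):
    """Weighted-random deletion of one generation from a sorted list of
    backup dates (e.g. day numbers).

    The oldest and the most recent n generations are protected. Each
    remaining generation's deletion weight is (1 / gap)**exponent, where
    gap is the distance to the generation just before it, so closely
    spaced generations are the most likely to be thinned out.
    """
    deletable = dates[1:len(dates) - n_keep_recent]
    if not deletable:
        return list(dates)
    weights = [
        (1.0 / (d - dates[dates.index(d) - 1])) ** exponent
        for d in deletable
    ]
    victim = rng.choices(deletable, weights=weights, k=1)[0]
    return [d for d in dates if d != victim]
```

Raising `exponent` concentrates deletions even harder on the densest clusters, which is why a larger exponent yields the more uniform spread described above.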

Incremented media method

This method has many variations and names. A set of numbered media is used until the end of the cycle. Then the cycle is repeated using media numbered the same as the previous cycle, but incremented by one. The lowest numbered tape from the previous cycle is retired and kept permanently. Thus, one has access to every backup for one cycle, and one backup per cycle before that. This method has the advantage of ensuring even media wear, but requires a schedule to be precalculated. The system is generally too complex to mentally calculate the next media to be used.
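The schedule is easy to precalculate even though it is hard to track mentally. A sketch under one reading of the description (assumed here: a cycle c uses the contiguous tape numbers c through c+N-1, and every tape numbered below c has been retired permanently):

```python
def incremented_media(cycle, set_size=10):
    """Tapes in play for a given 1-based cycle under the incremented
    media method: each cycle the whole numbered set shifts up by one,
    and the lowest-numbered tape of the previous cycle is retired.

    Returns (active, retired); the retired list holds exactly one
    permanently kept tape per completed cycle.
    """
    active = list(range(cycle, cycle + set_size))
    retired = list(range(1, cycle))
    return active, retired
```

So during cycle 3 with a set of 10, tapes 3 through 12 rotate while tapes 1 and 2 sit in the archive, giving the "one backup per cycle before that" property the text describes.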

sobota, 13 października 2012

Lotus Notes ADSync

http://www.ibm.com/developerworks/lotus/library/domino-adsync/

sobota, 6 października 2012

NAGIOS - switch FC Brocade

http://blog.barfoo.org/2009/11/23/monitoring-brocade-fc-switches-with-nagios/

NAGIOS - Bladecenter


Monitoring the IBM BladeCenter chassis with Nagios

Today I ended up working out the details on what we want to monitor regarding our BladeCenter. The most interesting details (for us that is) are these:
  • Fan speeds for Chassis Cooling/Power Module Cooling Bay(s)
  • Temperature
  • Power Domain utilization
It wasn’t *that* hard to implement. The only troubles I ran into were: (1) IBM did a real shitty job with the MIBs. If you look closely into the mmblade.mib, you’re gonna notice that not a single OID is specified for the events. (2) As the MIBs weren’t documented anywhere, I had to look them up via snmpwalk (which I had never used before). So as a reminder (to myself), here’s how it is done:
snmpwalk -v1 -c public -O n 10.0.0.35 .1.3.6.1.4.1.2.3.51.2.2
This will get you a list with a lot of output (5154 lines to be exact). Lucky me, the web interface of the management module and the SSH interface are rather verbose, so all you need to do is compare those values with what you are looking for.
So for myself (and anyone interested) read ahead for the list of checks we are currently running on the management module.
define command {
  command_name      check_snmpv1
  command_line      /usr/lib/nagios/plugins/check_snmp -C public \
                             -H $HOSTADDRESS$ -o $ARG1$ -w $ARG2$ \
                             -c $ARG3$ -l $ARG5$ -u $ARG4$
}
 
define host {
  use                   generic-network
  host_name             bc-mgmt1
  alias                 bc-mgmt1.home.barfoo.org
  address               10.0.0.35
  parents               uni-greif-05
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Temperature
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.1.5.1.0!\
                                  33!38!C!Input temperature
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Chassis Cooling - Bay 1
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.3.20.0!\
                                  1600:1200,2100:2600!1200:0,2600:3000!RPM!\
                                  Chassis Cooling - Bay 1
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Chassis Cooling - Bay 2
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.3.21.0!\
                                  1600:1200,2100:2600!1200:0,2600:3000!RPM!\
                                  Chassis Cooling - Bay 2
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 1
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.6.1!\
                                  6200:5400,6700:7000!5300:0,7000:7500!RPM!\
                                  Power Module Cooling - Bay 1
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 1 Fans
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.4.1!\
                                  2:1!1:0!Fans present!\
                                  Power Module Cooling - Bay 1
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 2
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.6.2!\
                                  6200:5400,6700:7000!5300:0,7000:7500!RPM!\
                                  Power Module Cooling - Bay 2
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 2 Fans
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.4.2!\
                                  2:1!1:0!Fans present!\
                                  Power Module Cooling - Bay 2
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 3
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.6.3!\
                                  6200:5400,6700:7000!5300:0,7000:7500!RPM!\
                                  Power Module Cooling - Bay 3
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 3 Fans
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.4.3!\
                                  2:1!1:0!Fans present!\
                                  Power Module Cooling - Bay 3
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 4
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.6.4!\
                                  6200:5400,6700:7000!5300:0,7000:7500!RPM!\
                                  Power Module Cooling - Bay 4
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Module Cooling - Bay 4 Fans
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.6.1.1.4.4!\
                                  2:1!1:0!Fans present!\
                                  Power Module Cooling - Bay 4
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Domain 1: Utilization
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.10.1.1.1.10.1!\
                                  2600:2400!2880:2600!W!\
                                  Power Domain 1: Utilization
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}
 
define service {
  use                   generic-service
  host_name             bc-mgmt1
  service_description   Power Domain 2: Utilization
  check_command         check_snmpv1!.1.3.6.1.4.1.2.3.51.2.2.10.1.1.1.10.2!\
                                  2600:2400!2880:2600!W!\
                                  Power Domain 2: Utilization
  action_url            /pnp/index.php?host=$HOSTNAME$&srv=$SERVICEDESC$
  notes                 View PNP RRD graph
}