INFORMATION ON USING BAD RAM MODULES
====================================

Initial note:
	Please read through this entire document at least once. It gives you
	a rough outline of what BadMEM is and how you must apply it!

Introduction
	RAM chips are shrinking, and as a result they are also becoming more
	and more vulnerable. This makes hardware manufacturing more expensive,
	since an excessive number of RAM chips must be discarded on account of
	a single faulty cell. Similarly, static discharge may permanently
	damage a RAM module, which is usually remedied by replacing it
	entirely.

	This is not necessary, as the BadMEM code shows: By informing the Linux
	kernel which addresses in a RAM are damaged, the kernel simply avoids
	ever allocating such addresses but makes all the rest available.

Reasons for this feature
	There are many reasons why this kernel feature is useful:
	 - Chip manufacture is resource intensive; waste less and sleep better
	 - It's another chance to promote Linux as "the flexible OS"
	 - Some laptops have their RAM soldered in... and then it fails!
	 - It's plain cool ;-)

Requirements
	This patch needs the badmem-utils package (version 1.3 or above) to
	be compiled correctly. You can download this package from
	http://badmem.sourceforge.net.

Running example
	To test this project, I was given two DIMMs of 32 MB each. One, which
	we shall use as a running example in this text, contained 512 faulty
	bits, spread over 1/4 of the address range in a regular pattern. Some
	tricks with a RAM tester and a few binary calculations were sufficient
	to write these faults down in 2 longword numbers.

	The kernel recognised the correct number of pages with faults and did
	not give them out for allocation. The allocation routines could
	therefore proceed as normal, without any adaptation.
	So, I gained 30 MB of DIMM which would otherwise have been thrown
	away. After booting, the kernel behaved exactly as it always had.

Initial checks
	If you experience RAM trouble, first read /usr/src/linux/memory.txt
	and try out the mem=4M trick to see if at least some initial parts
	of your RAM work well. The BadMEM routines halt the kernel in panic
	if the reserved area of memory (containing kernel stuff) contains
	a faulty address.

Running a RAM checker
	The memory checker is not built into the kernel, to avoid delays at
	runtime. If you experience problems that may be caused by RAM, run
	a good RAM checker, such as
		http://reality.sgi.com/cbrady_denver/memtest86
	The output of a RAM checker provides addresses that went wrong. In
	the 32 MB chip with 512 faulty bits mentioned above, the errors were
	found in the 8MB-16MB range (the DIMM was in slot #0) at addresses
		xxx42f4
		xxx62f4
		xxxc2f4
		xxxe2f4
	and the error was a "sticky 1 bit", a memory bit that stayed "1" no
	matter what was written to it. The regularity of this pattern
	suggests the death of a buffer at the output stages of a row on one of
	the chips. I expect such regularity to be commonplace. Finding this
	regularity currently requires human effort, but it should not be hard
	to alter a RAM checker to capture it in some sort of pattern, possibly
	the BadMEM patterns described below.

	By the way, if you manage to get hold of memtest86 version 2.3 or
	later, you can configure its printing mode to produce BadMEM patterns,
	which tell you exactly what you must enter on the LILO: command line,
	except that you should leave out the added spacing. That means that
	you can skip the following step, which saves you a *lot* of work.

Capturing errors in a pattern
	Instead of manually providing all 512 errors to the kernel, it's nicer
	to generate a pattern. Since the regularity is caused by the address
	decoding logic, which generally takes certain bits into account and
	ignores others, we shall provide a faulty address F, together with a
	bit mask M that specifies which bits must be equal to F. In C code,
	an address A is faulty if and only if
		(F & M) == (A & M)
	or alternatively (closer to a hardware implementation):
		!((F ^ A) & M)
	In the example 32 MB chip, we had the faulty addresses in 8MB-16MB:
		xxx42f4		....0100....
		xxx62f4		....0110....
		xxxc2f4		....1100....
		xxxe2f4		....1110....
	The second column represents the alternating hex digit in binary form.
	Apparently, the first and one-but-last binary digits can be anything,
	so the binary mask for that part is 0101. The mask for the part after
	this is 0xfff, and the part before should select anything in the range
	8MB-16MB, or 0x00800000-0x01000000; this is done with a bitmask
	0xff80xxxx. Combining these partial masks, we get:
		F=0x008042f4	M=0xff805fff
	That covers everything for this DIMM; for more complicated failing
	DIMMs, or for a combination of multiple failing DIMMs, it can be
	necessary to set up a number of such F/M pairs.
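	The F/M test above can be tried out in ordinary user-space C before
	rebooting. The following sketch is illustrative only (it is not code
	from the patch); it checks that the pair F=0x008042f4, M=0xff805fff
	matches the four reported addresses and leaves a nearby correct
	address alone:

	```c
	#include <stdio.h>

	/* An address A matches the fault pattern (F,M) when it agrees
	 * with F on every bit selected by M. */
	static int is_faulty(unsigned long a, unsigned long f, unsigned long m)
	{
		return (f & m) == (a & m);
	}

	int main(void)
	{
		const unsigned long F = 0x008042f4UL, M = 0xff805fffUL;
		const unsigned long bad[] = {
			0x008042f4UL, 0x008062f4UL, 0x0080c2f4UL, 0x0080e2f4UL
		};
		int i;

		for (i = 0; i < 4; i++)
			printf("0x%08lx -> %s\n", bad[i],
			       is_faulty(bad[i], F, M) ? "faulty" : "ok");

		/* a neighbouring address outside the pattern is kept */
		printf("0x%08lx -> %s\n", 0x008052f4UL,
		       is_faulty(0x008052f4UL, F, M) ? "faulty" : "ok");
		return 0;
	}
	```

	Running it prints "faulty" for all four example addresses and "ok"
	for the fifth, confirming the mask calculation before you commit it
	to a boot parameter.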

Rebooting Linux
	Now that these patterns are known (and double-checked; the calculations
	are highly error-prone, so it would be neat to test them in the RAM
	checker) we simply restart Linux with these F/M pairs as a parameter.
	If you normally boot as follows:
		LILO: linux
	you should now boot with
		LILO: linux badmem=0x008042f4,0xff805fff
	or perhaps by mentioning more F/M pairs in an order F0,M0,F1,M1,...
	Please note that you must *NOT* have selected

	    Extended Module support

	to pass this type of badmem command line to the kernel. If you would
	like to use this advanced way of configuring, please read the

	    Module Configuration

	section below.
	When you provide an odd number of arguments to BadMEM, the default mask
	0xffffffff (matching just one address) is applied to the last pattern.
	
	Beware of the command line length. At least up to LILO version 0.21,
	the command line is cut off after the 78th character; later versions
	may go as far as the kernel allows, namely 255 characters. In any
	case, it is not possible to enter more than 10 numbers in the BadMEM
	boot option.
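	The F0,M0,F1,M1,... convention, including the default mask for an odd
	number of values, can be illustrated with a small user-space parser.
	This is a hypothetical sketch of the parsing rule only; the kernel's
	real parser may differ in detail:

	```c
	#include <stdio.h>
	#include <stdlib.h>

	#define MAX_PAIRS 5	/* the boot option takes at most 10 numbers */

	struct badmem_pattern { unsigned long f, m; };

	/* Parse an F0,M0,F1,M1,... list.  An odd number of values gives
	 * the trailing F the default mask 0xffffffff, so it matches one
	 * address only.  Returns the number of pairs, or -1 on error. */
	static int parse_badmem(const char *s, struct badmem_pattern *out)
	{
		unsigned long vals[2 * MAX_PAIRS];
		int i, n = 0;

		while (*s && n < 2 * MAX_PAIRS) {
			char *end;

			vals[n] = strtoul(s, &end, 0);
			if (end == s)
				return -1;	/* not a number */
			n++;
			s = end;
			if (*s == ',')
				s++;
		}
		if (n % 2)		/* odd count: add the default mask */
			vals[n++] = 0xffffffffUL;

		for (i = 0; i < n / 2; i++) {
			out[i].f = vals[2 * i];
			out[i].m = vals[2 * i + 1];
		}
		return n / 2;
	}

	/* convenience wrapper: mask of the first parsed pair */
	static unsigned long first_mask(const char *s)
	{
		struct badmem_pattern p[MAX_PAIRS];

		return parse_badmem(s, p) > 0 ? p[0].m : 0;
	}

	int main(void)
	{
		printf("mask: 0x%08lx\n", first_mask("0x008042f4,0xff805fff"));
		printf("mask: 0x%08lx\n", first_mask("0x008042f4"));
		return 0;
	}
	```

	The first call yields the explicit mask 0xff805fff; the second, with
	only an F value given, falls back to the default mask 0xffffffff.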

	When the kernel now boots, it should not give any trouble with RAM.
	Mind you, this is under the assumption that the kernel and its data
	storage do not overlap an erroneous part. If the kernel does not choke
	on such an overlap right away, it will stop with a panic. You will
	need a RAM module whose initial part, say the first 2MB, is faultless.

	Now look up your memory status with
		dmesg | grep ^Memory:
	which prints a line much like
		Memory: 158524k/163840k available
			(940k kernel code,
			 412k reserved,
			 1856k data,
			 60k init,
			 2048k badram)
	The last entry, badram, is 2048k, representing the loss of 2MB of
	general-purpose RAM due to the errors. Or, positively rephrased:
	instead of throwing out 32MB as useless, you only throw out 2MB.

	If the system is stable (try compiling a few kernels, and run a few
	finds in / or so) you may add the boot parameter to /etc/lilo.conf
	for _all_ the kernels that handle this trouble, with a line
		append="badmem=0x008042f4,0xff805fff"
	after which you run "lilo".
	Warning: Don't experiment with these settings on your only boot image.
	If the BadMEM area overlaps kernel code, data, init, or other reserved
	memory, the kernel will halt in panic. Try the settings on a test boot
	image first, and if you get a panic you should change the order of
	your DIMMs [which may involve buying a new one just to be able to
	change the order].

BadRAM classification
	This technique may start a lively market for "dead" RAM. It is important
	to realise that some RAMs are more dead than others. So, instead of
	just providing a RAM size, it is also important to know the BadRAM
	class, which is defined as follows:
	
		A BadRAM class N means that at most 2^N bytes have a problem,
		and that all problems with the RAMs are persistent: They
		are predictable and always show up.

	The DIMM that serves as an example here was of class 9, since 512=2^9
	errors were found. Higher classes are worse, "correct" RAM is of class
	-1 (or even less, at your choice).
	Class N also means that the bitmask for your chip (if there is just
	one, that is) contains N "0" bits, and (if no two faults fall in the
	same page) that 2^N*PAGESIZE of memory is lost; in the example on an
	i386 architecture that is 2^9*4k=2MB, which accounts for the initial
	claim of 30MB of RAM gained with this DIMM.
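	For a single F/M pair, the class can be read off mechanically by
	counting the "0" bits in M. The following sketch is illustrative only
	(32-bit addresses and 4k pages assumed):

	```c
	#include <stdio.h>

	/* The BadRAM class N of a single F/M pair is the number of "0"
	 * bits in the mask M: each free bit doubles the number of
	 * matching addresses. */
	static int badram_class(unsigned long m)
	{
		unsigned long z;
		int n = 0;

		for (z = ~m & 0xffffffffUL; z; z >>= 1)
			n += z & 1;
		return n;
	}

	int main(void)
	{
		const unsigned long M = 0xff805fffUL;
		int n = badram_class(M);

		/* if no two faults share a page, 2^N pages of 4k are lost */
		printf("class %d, %lu kB lost\n", n, (1UL << n) * 4096 / 1024);
		return 0;
	}
	```

	For the example mask this prints "class 9, 2048 kB lost", matching
	the 2MB badram figure in the dmesg output above.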

	An alternative definition called "The BadRAM-4096 Specification" is
	available from
	
	     http://webrum.uni-mannheim.de/math/schmoigl/linux/

Further information on the BadMEM development process 
	For further information on the programming progress, please visit

	     http://badmem.sourceforge.net
	     
Module Configuration - the new way
	If you have complex holes in your memory and must configure many
	things via the LILO append line, it is very likely that the line is
	not long enough. Although you have approximately 255 characters, that
	is not much for BadMEM. Therefore there is a new way of configuring.
	To enable it, you must have selected

	     Extended Module support

	in General setup / BadMEM-patch.
	     
	IMPORTANT NOTE: Never -- really never -- pass a normal command line
	like

	    "badmem=0x0080fc04,0xffff4000"

	to a kernel with Extended Module support! This will make the kernel
	die during the start-up phase. For further configuration information,
	please read the file
	read the file
	
	     badmem_conf.txt
	     
        in this directory. 
       

Known Bugs
	LILO is known to cut off command lines which are too long. For the
	lilo-0.21 distribution, a command line may not exceed 78 characters,
	while actually 255 would be possible [on i386, kernel 2.2.14].
	LILO does _not_ report too-long command lines, but the error will
	show up as either a panic at boot time, stating
		panic: BadMEM page in initial area
	or the dmesg line starting with Memory: will mention an unpredicted
	number of kilobytes. (Note that the latter number only includes
	errors in accessed memory.)

Future Possibilities
	It would be possible to use even more of the faulty RAM by employing
	it for slabs. The smaller allocation granularity of slabs makes it
	possible to throw out just, say, the 32 bytes surrounding an error.
	This would mean that the example DIMM only loses 16kB instead of 2MB.
	It might even be possible to allocate the slabs in such a way that,
	where possible, the remaining bytes in a slab structure are allocated
	around the error, reducing the RAM loss to 0 in the optimal situation!

	However, this yield is somewhat faked: It is possible to provide 512
	pages of 32-byte slabs, but it is not certain that anyone would use
	that many 32-byte slabs at any time.

	A better solution might be to alter the page allocation for a slab to
	prefer BadMEM pages, and give those pages a special treatment.
	This way, the BadRAM would be spread over all the slabs, which seems
	more likely to be a `true' pay-off. This would yield more overhead at
	slab allocation time, but on the other hand, by the nature of slabs,
	such allocations are made as rare as possible, so it might not matter
	that much. I am uncertain where to go.

Origin
	The BadRAM project is an idea and implementation by
		Rick van Rein
		Binnenes 67
		9407 CX Assen
		The Netherlands
		vanrein@cs.utwente.nl
		http://home.zonnet.nl/vanrein/badram
	
	This patch uses his work as its basis. Patch migration to the 2.4.x
	series, the proc fs support, MODSYSTEM, memmap support and much more
	was added by Nico Schmoigl <nico@writemail.com>, leading to the
	new BadMEM patch. Its homepage is at

                http://badmem.sourceforge.net
 
                                Have fun with it!
				Nico
