Hi Pete, Or, and Robin!

Thank you all for taking the issue so seriously.

> Hence, I was ending up at FMR. But other things use FMR in the
> kernel too, like SRP, and we don't know of problems there. SRP uses
> the cache feature of the FMR pool, while iSER does not. This seems
> like less complexity to worry about for iSER.

Just for the record: SRP runs without problems on our setup.

Should we try to disable the caching in SRP? If yes, how do we do that?

>>> My guess is that the AMD HyperTransport may interfere with the FMR.
>>> But I am no Linux memory management specialist, so please correct me
>>> if I am wrong. Maybe the following happens: booted with one CPU, all
>>> FMR requests go to the 16 GB of RAM this single CPU directly addresses
>>> via its memory controller. With more than one active CPU, the memory
>>> is fetched from both CPUs' memory controllers, with preference for
>>> local memory. In seldom cases the memory manager fetches memory for
>>> the FMR process running on CPU0 from CPU1 via the HyperTransport
>>> channel, and something weird happens.
>>>
>> To make sure we are on the same page (...) here: FMR (Fast Memory
>> Registration) is a means to register with the HCA a (say) arbitrary list
>> of pages to be used for an I/O. This page SG (scatter-gather) list was
>> allocated and provided by the SCSI midlayer to the iSER SCSI LLD
>> (low-level driver) through the queuecommand interface. So I read your
>> comment as saying that when using one CPU and/or a system with one
>> memory controller, all I/Os are served with pages from the "same memory",
>> whereas when this doesn't happen, something gets broken.

Where may I find a good source of information about how all these things
work together?

>> I wasn't sure I followed the sentence "In seldom cases the memory
>> manager fetches memory for the FMR process running on CPU0 from CPU1
>> via the hyper-transport channel and something weird happens" - can you
>> explain a bit what you were referring to?

This was only wild guessing around. I am only scratching at the surface of
InfiniBand technology, so I may not be of great help at the actual
discussion level.

> It could all just be timing issues. Robin could generate the
> problem at will on particularly hefty SGI boxes. He also noticed
> that multiple readers would generate the problem more reliably, but
> it was also possible with a single reader. I never could manage to
> see a problem on little 2-socket AMD 4 GB boxes. A flaw in the PCIe
> I/O controllers talking via HT to remote memory seems unlikely.

Since the error can be generated by using more than one thread on a single
core on a system with only one physical CPU, HyperTransport is ruled out as
the primary source of the error. Right?

The erratic nature of the errors does not point to a simple deterministic
error like a buffer overflow. But a timing problem is hard to track down.
I have no idea how to proceed further with the testing.

For the record, a divergent finding I got via personal communication:
Sebastian Schmitzdorff from hamburgnet.de has an OFED 1.4 CentOS setup
running on multi-socket AMD64 that did NOT show the read errors.

His system configuration:
  OFED-1.4-20090217-0600
  CentOS 5.2 x86_64, latest updates installed
  Kernel: 2.6.18-92.1.22.el5
  iSER target and Open-iSCSI directly from OFED; the target he used is 20080828.

Please feed me with fresh testing setups to narrow the issue down further.
We may also provide direct access to our testing machines if this is of help.
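Coming back to my own question about disabling the FMR pool caching in SRP:
I have not tried this, but as far as I understand it, the cache is just a
flag in the ib_fmr_pool_param that ib_srp passes to ib_create_fmr_pool().
Below is a hypothetical sketch of what I believe the relevant setup looks
like; the pool sizes are illustrative placeholders, not the real constants
from ib_srp.c, so please correct me if I misread the code.

#include <linux/mm.h>
#include <linux/string.h>
#include <rdma/ib_verbs.h>
#include <rdma/ib_fmr_pool.h>

/*
 * Hypothetical sketch (not copied from ib_srp.c): create an FMR pool
 * the way I believe the SRP initiator does.  The interesting part is
 * the "cache" bit -- SRP sets it, iSER does not.
 */
static struct ib_fmr_pool *srp_like_fmr_pool(struct ib_pd *pd)
{
	struct ib_fmr_pool_param fmr_param;

	memset(&fmr_param, 0, sizeof fmr_param);
	fmr_param.pool_size         = 1024;      /* FMRs kept in the pool (placeholder) */
	fmr_param.dirty_watermark   = 256;       /* when to flush remaps (placeholder)  */
	fmr_param.max_pages_per_fmr = 64;        /* pages per mapping (placeholder)     */
	fmr_param.page_shift        = PAGE_SHIFT;
	fmr_param.access            = IB_ACCESS_LOCAL_WRITE |
	                              IB_ACCESS_REMOTE_WRITE |
	                              IB_ACCESS_REMOTE_READ;
	/*
	 * This is the caching in question: with cache = 1 the pool reuses
	 * an FMR whose cached page list matches a new request.  Setting it
	 * to 0 should make SRP behave like iSER in this respect.
	 */
	fmr_param.cache             = 1;

	return ib_create_fmr_pool(pd, &fmr_param);
}

If that reading is correct, the experiment would simply be to set
fmr_param.cache = 0 in drivers/infiniband/ulp/srp/ib_srp.c, rebuild the
ib_srp module and rerun the SRP tests - but I would like confirmation from
someone who actually knows the SRP code before we try that.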
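And to make sure I understood Or's FMR explanation correctly: my mental
model is that, per I/O, the driver turns the midlayer's SG list into a flat
array of page addresses and hands it to the HCA through the FMR pool, along
the lines of the fragment below. This is not taken from the iSER sources;
only ib_fmr_pool_map_phys() is a real kernel function, the helper around it
is made up for illustration.

#include <linux/err.h>
#include <rdma/ib_fmr_pool.h>

/*
 * Illustrative only: "pages" is assumed to have been built elsewhere by
 * walking the SG list that queuecommand handed to the LLD (one u64 per
 * HCA page).  The FMR pool registers that page list with the HCA.
 */
static struct ib_pool_fmr *register_io_pages(struct ib_fmr_pool *pool,
					     u64 *pages, int npages,
					     u64 io_addr)
{
	struct ib_pool_fmr *fmr;

	/* Map the page list; the resulting lkey/rkey sit in fmr->fmr. */
	fmr = ib_fmr_pool_map_phys(pool, pages, npages, io_addr);
	if (IS_ERR(fmr))
		return NULL;

	return fmr;
}

If that picture is wrong somewhere, please correct me.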
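Finally, since Pete mentioned that multiple readers make the problem show up
more reliably, here is the kind of user-space reproducer I have in mind for
the next round of tests. It is untested so far, and the device path, thread
count and region size are of course just placeholders.

/*
 * Untested sketch of a multi-reader corruption check against the
 * iSER-attached block device.  Each thread reads the same region twice
 * with O_DIRECT and compares the two passes; on an idle device they
 * must match, so a mismatch points at corruption on the read path.
 *
 * Build:  gcc -O2 -Wall -pthread -o readcheck readcheck.c
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DEVICE   "/dev/sdX"        /* placeholder for the iSER device */
#define NTHREADS 8
#define REGION   (64UL << 20)      /* 64 MiB compared per iteration   */

static void *reader(void *arg)
{
	long id = (long)arg;
	void *a, *b;
	int fd, iter;

	fd = open(DEVICE, O_RDONLY | O_DIRECT);
	if (fd < 0 ||
	    posix_memalign(&a, 4096, REGION) ||
	    posix_memalign(&b, 4096, REGION)) {
		perror("setup");
		return NULL;
	}

	for (iter = 0; ; iter++) {
		if (pread(fd, a, REGION, 0) != (ssize_t) REGION ||
		    pread(fd, b, REGION, 0) != (ssize_t) REGION) {
			perror("pread");
			break;
		}
		if (memcmp(a, b, REGION) != 0) {
			fprintf(stderr, "thread %ld: data mismatch in iteration %d\n",
				id, iter);
			break;
		}
	}

	close(fd);
	free(a);
	free(b);
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, reader, (void *) i);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	return 0;
}

O_DIRECT is there so that the repeated reads really go over the wire instead
of being served from the initiator's page cache; without it the comparison
would mostly test local RAM.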
Cheers,

Volker

--
====================================================
   inqbus it-consulting      +49 ( 341 )  5643800
   Dr.  Volker  Jaenisch     http://www.inqbus.de
   Herloßsohnstr.    12      0 4 1 5 5    Leipzig
   N O T - F Ä L L E         +49 ( 170 )  3113748
====================================================