Minutes: OTIS Review, 05.06.2001
Start: 14:00
End:   18:30

Present: Karl-Tasso Knoepfle, Harald Deppe, Martin Feuerstack-Raible,
Andre Srowig, Uwe Stange, Ulrich Trunk, Ulrich Uwer, Dirk Wiedner.

Agenda:
1) LHCb OT/FE (Martin Feuerstack-Raible)
2) Status DLL (Harald Deppe)
3) Status Memory (Andre Srowig)
4) Status Control Logic (Uwe Stange)
5) Time Schedule (Martin Feuerstack-Raible)

1) LHCb OT/FE
-------------
1.1 Two possible readout chains
* HPTDC (J. Christiansen): radiation-tolerant design; readout scheme not
  LHCb conformant (data concentrators required); works data driven, which
  possibly introduces dead times at high occupancies.
* OTIS: radiation-hard design; LHCb-conformant readout scheme; works clock
  driven, which introduces no dead times.
Discussion: is there any effort going into the HPTDC solution?

1.2 Motivation for OTIS
The HPTDC solution is not suited for the magnet stations; the OTIS layout
will be completely radiation hard; LHCb-conformant readout scheme; OTIS
will be placed next to the discriminators, which reduces the cabling
effort.

1.3 TDC architecture
Scheme; DACs for ASDblr/q; test pulse insertion; slow control.

1.4 OTIS features
ASDblr/q compatible; the two ASDblr comparator thresholds can be used for
pulse height measurements; with the ASDq this information is encoded in
the length of the comparator signal, which does not fit the current
DLL/hit-register/decoder architecture.

1.5 The big picture
Complete scheme: modules with 4*128 straws, 4 PCBs with 4 ASDblr and one
OTIS each, high voltage, external cabling, signal distributor board, GOL,
patch panel, counting room. The distance to the counting room is
approximately 70m.
Discussion: each group develops its own Level-1 buffer; there will be no
LHCb-wide solution.

1.6 Data collection
Star architecture: 4 OTIS chips share one GOL, i.e. 4*8Bit to 1*32Bit;
max. net GOL data rate: 1.2GBit/sec, i.e. each OTIS has at most
300MBit/sec at its disposal.

1.7 OTIS data format
TDC-ID (16Bit, may be shortened); BCC (16Bit); hit mask (32Bit); drift
times. The necessary data rates are listed in LHCbNote2000-15.

1.8 GOL
Successful test of a transmission network (GOL + commercial components) by
M. Menouni. The error rate is negligible up to 2GBit/sec. Biggest problem:
the clock jitter of the TTCrx is too large.

Question: Upon which practical knowledge or publications does the OTIS
project rely?
DLL: technical literature. SRAM: the cell design is state of the art;
nevertheless, existing designs could not be used because of power
consumption considerations and the unusual dimensions (a data word is
240bit wide). Comparator signal pad: trivial. Remaining parts: OTIS
specific.
It is suggested that future work should take existing knowledge into
account (e.g. DTMROC(?)).

2) Status DLL
-------------
2.1 DLL scheme
Variable delay elements; charge pump/loop filter; phase detector.

2.2 Differences between 1st and 2nd submission
* Switched clock and data pins of the hit registers (in the first case the
  (asynchronous) hit signal samples the DLL status; in the second case the
  different (clock-synchronous) DLL signals sample the hit signal). Thereby
  the system works in the LHC clock domain right from the start.
* Signal tap after each inverter (doubled number of time bins while the
  number of delay elements stayed the same).
* Clock and hit signal are now routed differentially.
* New design kit; new extraction rules.

2.3 Measurements (1st submission)
* Lock time: 2us (rather fast).
* Lock gets lost accidentally (problem not yet understood). For future DLL
  versions it is planned to observe the lock status and to reset the DLL
  if necessary. Suggestion: transmit the lock status within every data
  set.
* Lock range: 22-44MHz (at T=20C). Expectations were 30-50MHz (problem
  understood: displacement due to incorrect extraction rules).
  Question: How fast does the DLL get readjusted? A: Every 25ns.
  Question: How to recover from a disturbance at a frequency of 40MHz?
  A: Intrinsic problem; the DLL needs a very stable power supply.
  Verification within the second submission: there a single chip contains
  the DLL and the SRAM (which runs at a frequency of 40MHz).
* Differential non-linearity: 1.9 time bins at 40MHz and 0.6 time bins at
  30MHz. Expectations for the 2nd submission (correct extraction rules)
  are 0.6 time bins at 40MHz, which would satisfy the specifications (see
  the code-density sketch at the end of this section).
* Main contributors to the DNL are
  - the non-variable dummy delay element (fixed),
  - phase detector switching noise,
  - clock and hit crosstalk (expected to get smaller due to the
    differential clock routing).
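A DNL figure like the one above is usually quoted in units of one time bin
and can be obtained with a code-density test: hits with uniformly
distributed arrival times are histogrammed over the TDC codes, and the
relative deviation of each bin's population from the mean gives the DNL
per code. The following Python sketch shows only this standard evaluation
on synthetic data; the bin count of 64 and the random input are
illustrative assumptions (only the doubling of the taps is stated above),
not the measurement setup behind the quoted numbers.

  # Code-density DNL estimate for a TDC, on purely synthetic data.
  # N_BINS and N_HITS are assumed values for illustration only.
  import random

  N_BINS = 64        # time bins per 25ns clock period (assumed)
  N_HITS = 1000000   # synthetic, uniformly distributed hit times

  # Histogram of TDC codes from uniformly random hit times.
  counts = [0] * N_BINS
  bin_width = 25.0 / N_BINS
  for _ in range(N_HITS):
      t = random.uniform(0.0, 25.0)              # hit time within one period [ns]
      code = min(int(t / bin_width), N_BINS - 1)
      counts[code] += 1

  # DNL per code in units of one time bin (LSB): deviation of each bin's
  # population from the ideal uniform population.
  mean = sum(counts) / N_BINS
  dnl = [c / mean - 1.0 for c in counts]

  print("max |DNL| = %.2f time bins" % max(abs(d) for d in dnl))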
3) Status SRAM
--------------
3.1 Characteristics
Pipeline and derandomizing buffer are implemented as SRAM.
Dimensions: 164 * 240 bits (1.9mm * 2.9mm); one cell: 7um * 10um;
dual ported; radiation-hard layout; runs at 40MHz (1.1GByte/sec).
Read access time: 13ns. Data out valid: cycle time - 2ns, i.e. ~23ns.
Min. data in valid: 8ns.
Coordinators generate all necessary control signals.

3.2 Simulation
Due to hardware restrictions the memory array was reduced to a minimum
number of cells; the missing cells were replaced by an RC network.
Simulations show that the RAM consumes approx. 100-150mA (for about 1-2ns)
each cycle and that it will work down to a cycle time of 18ns.
Presentation of a timing diagram for a read and a write cycle.

3.3 Prototype
Submitted Feb 2001; contains additional test structures (data generator,
error checker, SEU counter, address generators).
Suggestion: modify these test structures to build an internal memory self
test.

3.4 Test PCB
PCB already equipped; waiting for chip delivery.

4) Status pipeline control logic
--------------------------------
4.1 Programming work in progress.
The control logic behaves the same for consecutive readout (max. 3 data
sets) and for readout with the hit scanner algorithm (max. 3 data sets).
The difference between write pointer and trigger pointer is a function of
the latency and of the number of data sets to read out per trigger. For
consecutive triggers it is possible that one data set belongs to different
triggers (up to 3), but the data set gets copied to the derandomizing
buffer only once; therefore such data sets get marked (a behavioural
sketch of this pointer bookkeeping is appended after section 5).
Suggestion: transmit such a data set only once to the L1 buffer.

4.2 Test environment
Behavioural model of the SRAM to account for setup and hold times.
"Off-line" calculation of random hits/drift times (with variable ASDblr
dead time, maximum drift time and occupancy). Hits in neighboring channels
are not (yet) correlated.

4.3 Next steps
Finish programming/simulation of the pipeline control logic.
Implementation and test on an FPGA.
Start programming of the sparsification and readout logic.

5) Miscellaneous
----------------
5.1 Time schedule
Oct 2000: 1st DLL
Feb 2001: 2nd DLL, SRAM
Jul 2001: start of tests with the GOL
Summer 2001: tests with DLL and SRAM
Late autumn 2001/spring 2002: 1st prototype of the complete TDC
Summer 2002: 2nd version of the TDC
Spring 2003: start of mass production

5.2 Suggestion: try to join the Beetle and OTIS projects for the
engineering run and mass production. This seems not achievable: the time
schedules differ too much.
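Sketch of the pointer bookkeeping described in 4.1: the write pointer
advances with every bunch crossing, the trigger pointer lags behind it by
the latency, and each trigger selects up to 3 consecutive data sets, of
which those already copied for an earlier trigger are only marked as
shared. The pipeline depth, the latency value and the trigger pattern in
the following Python sketch are illustrative assumptions, not the final
OTIS parameters; only the "3 data sets per trigger, shared sets copied
once and marked" behaviour is taken from the minutes.

  # Behavioural sketch of write-pointer / trigger-pointer bookkeeping.
  # PIPELINE_DEPTH, LATENCY and the trigger pattern are assumed values.
  PIPELINE_DEPTH = 164   # pipeline rows (the SRAM prototype size)
  LATENCY        = 128   # trigger latency in bunch crossings (assumed)
  SETS_PER_TRIG  = 3     # data sets read out per trigger

  pipeline     = [None] * PIPELINE_DEPTH   # circular pipeline memory
  derandomizer = []                        # copied data sets
  copied_bx    = {}                        # bunch crossing -> derandomizer index

  def clock_tick(bx, data, triggered):
      """One 25ns cycle: store the new data set and serve a possible trigger."""
      pipeline[bx % PIPELINE_DEPTH] = (bx, data)   # write pointer = bx mod depth
      if not triggered:
          return
      trig_bx = bx - LATENCY                       # trigger pointer lags by the latency
      for sel_bx in range(trig_bx, trig_bx + SETS_PER_TRIG):
          entry = pipeline[sel_bx % PIPELINE_DEPTH]
          if entry is None or entry[0] != sel_bx:
              continue                             # not filled yet or already overwritten
          if sel_bx in copied_bx:
              # Data set belongs to an earlier trigger as well:
              # copy it only once, just mark the existing entry.
              derandomizer[copied_bx[sel_bx]]["shared"] = True
          else:
              copied_bx[sel_bx] = len(derandomizer)
              derandomizer.append({"bx": sel_bx, "data": entry[1], "shared": False})

  # Two consecutive triggers that overlap in two bunch crossings:
  for bx in range(400):
      clock_tick(bx, "hits@%d" % bx, triggered=(bx in (300, 301)))
  print(len(derandomizer), "copies,",
        sum(e["shared"] for e in derandomizer), "marked as shared")   # 4 copies, 2 marked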
(07.06.2001 Uwe Stange)