This is buildbot.info, produced by makeinfo version 4.11 from
buildbot.texinfo.

This is the BuildBot manual.

   Copyright (C) 2005,2006 Brian Warner

   Copying and distribution of this file, with or without
modification, are permitted in any medium without royalty provided
the copyright notice and this notice are preserved.


File: buildbot.info,  Node: Top,  Next: Introduction,  Prev: (dir),  Up: (dir)

BuildBot
********

This is the BuildBot manual.

   Copyright (C) 2005,2006 Brian Warner

   Copying and distribution of this file, with or without
modification, are permitted in any medium without royalty provided
the copyright notice and this notice are preserved.

* Menu:

* Introduction::                What the BuildBot does.
* Installation::                Creating a buildmaster and buildslaves,
                                running them.
* Concepts::                    What goes on in the buildbot's little mind.
* Configuration::               Controlling the buildbot.
* Getting Source Code Changes::  Discovering when to run a build.
* Build Process::               Controlling how each build is run.
* Status Delivery::             Telling the world about the build's results.
* Command-line tool::
* Resources::                   Getting help.
* Developer's Appendix::
* Index of Useful Classes::
* Index of master.cfg keys::
* Index::                       Complete index.

 --- The Detailed Node Listing ---

Introduction

* History and Philosophy::
* System Architecture::
* Control Flow::

System Architecture

* BuildSlave Connections::
* Buildmaster Architecture::
* Status Delivery Architecture::

Installation

* Requirements::
* Installing the code::
* Creating a buildmaster::
* Upgrading an Existing Buildmaster::
* Creating a buildslave::
* Launching the daemons::
* Logfiles::
* Shutdown::
* Maintenance::
* Troubleshooting::

Creating a buildslave

* Buildslave Options::

Troubleshooting

* Starting the buildslave::
* Connecting to the buildmaster::
* Forcing Builds::

Concepts

* Version Control Systems::
* Schedulers::
* BuildSet::
* BuildRequest::
* Builder::
* Users::
* Build Properties::

Version Control Systems

* Generalizing VC Systems::
* Source Tree Specifications::
* How Different VC Systems Specify Sources::
* Attributes of Changes::

Users

* Doing Things With Users::
* Email Addresses::
* IRC Nicknames::
* Live Status Clients::

Configuration

* Config File Format::
* Loading the Config File::
* Testing the Config File::
* Defining the Project::
* Change Sources and Schedulers::
* Setting the slaveport::
* Buildslave Specifiers::
* On-Demand ("Latent") Buildslaves::
* Defining Global Properties::
* Defining Builders::
* Defining Status Targets::
* Debug options::

Change Sources and Schedulers

* Scheduler Scheduler::
* AnyBranchScheduler::
* Dependent Scheduler::
* Periodic Scheduler::
* Nightly Scheduler::
* Try Schedulers::
* Triggerable Scheduler::

Buildslave Specifiers
* When Buildslaves Go Missing::

On-Demand ("Latent") Buildslaves
* Amazon Web Services Elastic Compute Cloud ("AWS EC2")::
* Dangers with Latent Buildslaves::
* Writing New Latent Buildslaves::

Getting Source Code Changes

* Change Sources::
* Choosing ChangeSources::
* CVSToys - PBService::
* Mail-parsing ChangeSources::
* PBChangeSource::
* P4Source::
* BonsaiPoller::
* SVNPoller::
* MercurialHook::
* Bzr Hook::
* Bzr Poller::

Mail-parsing ChangeSources

* Subscribing the Buildmaster::
* Using Maildirs::
* Parsing Email Change Messages::

Parsing Email Change Messages

* FCMaildirSource::
* SyncmailMaildirSource::
* BonsaiMaildirSource::
* SVNCommitEmailMaildirSource::

Build Process

* Build Steps::
* Interlocks::
* Build Factories::

Build Steps

* Common Parameters::
* Using Build Properties::
* Source Checkout::
* ShellCommand::
* Simple ShellCommand Subclasses::
* Python BuildSteps::
* Transferring Files::
* Steps That Run on the Master::
* Triggering Schedulers::
* Writing New BuildSteps::

Source Checkout

* CVS::
* SVN::
* Darcs::
* Mercurial::
* Arch::
* Bazaar::
* Bzr::
* P4::
* Git::

Simple ShellCommand Subclasses

* Configure::
* Compile::
* Test::
* TreeSize::
* PerlModuleTest::
* SetProperty::

Python BuildSteps

* BuildEPYDoc::
* PyFlakes::
* PyLint::

Writing New BuildSteps

* BuildStep LogFiles::
* Reading Logfiles::
* Adding LogObservers::
* BuildStep URLs::

Build Factories

* BuildStep Objects::
* BuildFactory::
* Process-Specific build factories::

BuildStep Objects

* BuildFactory Attributes::
* Quick builds::

BuildFactory

* BuildFactory Attributes::
* Quick builds::

Process-Specific build factories

* GNUAutoconf::
* CPAN::
* Python distutils::
* Python/Twisted/trial projects::

Status Delivery

* WebStatus::
* MailNotifier::
* IRC Bot::
* PBListener::
* Writing New Status Plugins::

WebStatus

* WebStatus Configuration Parameters::
* Buildbot Web Resources::
* XMLRPC server::
* HTML Waterfall::

Command-line tool

* Administrator Tools::
* Developer Tools::
* Other Tools::
* .buildbot config directory::

Developer Tools

* statuslog::
* statusgui::
* try::

waiting for results

* try --diff::

Other Tools

* sendchange::
* debugclient::


File: buildbot.info,  Node: Introduction,  Next: Installation,  Prev: Top,  Up: Top

1 Introduction
**************

The BuildBot is a system to automate the compile/test cycle required
by most software projects to validate code changes. By automatically
rebuilding and testing the tree each time something has changed,
build problems are pinpointed quickly, before other developers are
inconvenienced by the failure. The guilty developer can be identified
and harassed without human intervention. By running the builds on a
variety of platforms, developers who do not have the facilities to
test their changes everywhere before checkin will at least know
shortly afterwards whether they have broken the build or not. Warning
counts, lint checks, image size, compile time, and other build
parameters can be tracked over time, are more visible, and are
therefore easier to improve.

   The overall goal is to reduce tree breakage and provide a platform
to run tests or code-quality checks that are too annoying or pedantic
for any human to waste their time with. Developers get immediate (and
potentially public) feedback about their changes, encouraging them to
be more careful about testing before checkin.

   Features:

   * run builds on a variety of slave platforms

   * arbitrary build process: handles projects using C, Python,
     whatever

   * minimal host requirements: python and Twisted

   * slaves can be behind a firewall if they can still do checkout

   * status delivery through web page, email, IRC, other protocols

   * track builds in progress, provide estimated completion time

   * flexible configuration by subclassing generic build process
     classes

   * debug tools to force a new build, submit fake Changes, query
     slave status

   * released under the GPL

* Menu:

* History and Philosophy::
* System Architecture::
* Control Flow::


File: buildbot.info,  Node: History and Philosophy,  Next: System Architecture,  Prev: Introduction,  Up: Introduction

1.1 History and Philosophy
==========================

The Buildbot was inspired by a similar project built for a development
team writing a cross-platform embedded system. The various components
of the project were supposed to compile and run on several flavors of
unix (linux, solaris, BSD), but individual developers had their own
preferences and tended to stick to a single platform. From time to
time, incompatibilities would sneak in (some unix platforms want to
use `string.h', some prefer `strings.h'), and then the tree would
compile for some developers but not others. The buildbot was written
to automate the human process of walking into the office, updating a
tree, compiling (and discovering the breakage), finding the developer
at fault, and complaining to them about the problem they had
introduced. With multiple platforms it was difficult for developers to
do the right thing (compile their potential change on all platforms);
the buildbot offered a way to help.

   Another problem was when programmers would change the behavior of a
library without warning its users, or change internal aspects that
other code was (unfortunately) depending upon. Adding unit tests to
the codebase helps here: if an application's unit tests pass despite
changes in the libraries it uses, you can have more confidence that
the library changes haven't broken anything. Many developers
complained that the unit tests were inconvenient or took too long to
run: having the buildbot run them reduces the developer's workload to
a minimum.

   In general, having more visibility into the project is always good,
and automation makes it easier for developers to do the right thing.
When everyone can see the status of the project, developers are
encouraged to keep the tree in good working order. Unit tests that
aren't run on a regular basis tend to suffer from bitrot just like
code does: exercising them on a regular basis helps to keep them
functioning and useful.

   The current version of the Buildbot is additionally targeted at
distributed free-software projects, where resources and platforms are
only available when provided by interested volunteers. The buildslaves
are designed to require an absolute minimum of configuration, reducing
the effort a potential volunteer needs to expend to be able to
contribute a new test environment to the project. The goal is that
anyone who wishes that a given project would run on their favorite
platform should be able to offer that project a buildslave, running
on that platform, where they can verify that their portability code
works, and keeps working.


File: buildbot.info,  Node: System Architecture,  Next: Control Flow,  Prev: History and Philosophy,  Up: Introduction

1.2 System Architecture
=======================

The Buildbot consists of a single `buildmaster' and one or more
`buildslaves', connected in a star topology. The buildmaster makes
all decisions about what, when, and how to build. It sends commands
to be run on the buildslaves, which simply execute the commands and
return the results. (Certain steps involve more local decision
making, where the overhead of sending a lot of commands back and
forth would be inappropriate, but in general the buildmaster is
responsible for everything.)

   The buildmaster is usually fed `Changes' by some sort of version
control system (*note Change Sources::), which may cause builds to be
run. As the builds are performed, various status messages are
produced, which are then sent to any registered Status Targets (*note
Status Delivery::).


                  +------------------+           +-----------+
                  |                  |---------->|  Browser  |
                  |   BuildMaster    |           +-----------+
        Changes   |                  |--------------->+--------+
     +----------->|                  | Build Status   | email  |
     |            |                  |------------+   +--------+
     |            |                  |-------+    |     +---------------+
     |            +------------------+       |    +---->| Status Client |
+----------+         | ^      | ^            |          +---------------+
| Change   |         | |     C| |            |             +-----+
|  Sources |         | |     o| |            +------------>| IRC |
|          |         | |     m| |R                         +-----+
| CVS      |         v |     m| |e
| SVN      |    +---------+  a| |s
| Darcs    |    |  Build  |  n| |u
| .. etc   |    |  Slave  |  d| |l
|          |    +---------+  s| |t
|          |                  v |s
+----------+                +---------+
                            |  Build  |
                            |  Slave  |
                            +---------+

   The buildmaster is configured and maintained by the "buildmaster
admin", who is generally the project team member responsible for
build process issues. Each buildslave is maintained by a "buildslave
admin", who does not need to be quite as involved. Generally slaves
are run by anyone who has an interest in seeing the project work well
on their favorite platform.

* Menu:

* BuildSlave Connections::
* Buildmaster Architecture::
* Status Delivery Architecture::


File: buildbot.info,  Node: BuildSlave Connections,  Next: Buildmaster Architecture,  Prev: System Architecture,  Up: System Architecture

1.2.1 BuildSlave Connections
----------------------------

The buildslaves are typically run on a variety of separate machines,
at least one per platform of interest. These machines connect to the
buildmaster over a TCP connection to a publicly-visible port. As a
result, the buildslaves can live behind a NAT box or similar
firewalls, as long as they can get to the buildmaster. The TCP
connections are initiated by the buildslave and accepted by the
buildmaster, but traffic flows in both directions within this
connection. The buildmaster is always in charge, so commands travel
exclusively from the buildmaster to the buildslave, and results
travel back the other way.

   To perform builds, the buildslaves must typically obtain source
code from a CVS/SVN/etc repository. Therefore they must also be able
to reach the repository. The buildmaster provides instructions for
performing builds, but does not provide the source code itself.



Repository|  |       BuildMaster   |      |
 (CVS/SVN)|  |                    ^|^^^   |
          |  |                   / c   \  |
----------+  +------------------/--o----\-+
        ^                      /   m  ^  \
        |                     /    m  |   \
 checkout/update              --+  a  | +--
        |                    TCP|  n  | |TCP
        |                       |  d  | |
        |                       |  s  | |
        |                       |  |  | |
        |                       |  |  r |
        |                       |  |  e |
 -N-A-T-|- - - - -N-A-T- - - - -|- |- s-|- - - - -N-A-T- - -
        |                       |  |  u |
        |                       |  |  l |
        |    +------------------|--|--t-|-+
        |    |                  |  |  s | |
        +----|                     v  |   |
             |                        |   |
             |                        |   |
             |                            |
             |       BuildSlave           |
             +----------------------------+
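
   On the master side, the TCP port that accepts these connections,
and the set of buildslaves that are allowed to attach to it, are both
declared in the buildmaster's configuration file; the matching
slave-side directory is created with the `buildbot create-slave'
command (*note Creating a buildslave::). The following is only a
rough sketch, using the configuration keys described later in this
manual, with placeholder slave names and passwords:

     from buildbot.buildslave import BuildSlave

     c = BuildmasterConfig = {}
     # TCP port the buildslaves connect to; it must be reachable from
     # each slave, e.g. forwarded through any intervening NAT box
     c['slavePortnum'] = 9989
     # each buildslave authenticates itself with a name and password
     c['slaves'] = [BuildSlave("slave-i386", "sekrit"),
                    BuildSlave("slave-ppc", "sekrit")]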


File: buildbot.info,  Node: Buildmaster Architecture,  Next: Status Delivery Architecture,  Prev: BuildSlave Connections,  Up: System Architecture

1.2.2 Buildmaster Architecture
------------------------------

The Buildmaster consists of several pieces:



 +---------------+
 | Change Source |----->----+
 +---------------+          |
                         Changes
                            |
 +---------------+          v
 | Change Source |----->----+
 +---------------+          v
                      +-----+-------+
                      |             |
                      v             v
              +-----------+    +-----------+
              | Scheduler |    | Scheduler |
              +-----------+    +-----------+
                 |                |  |
          +------+---------+  +---+  +-----+
          |                |  |            |
          v                |  |          Build
      :      :           : v  v :        Request
      :      :           :      :          |
      : ---- :           :      :          |
      : ---- :           : ---- :          |
      +======+           +======+      :   v  :
         |                  |          :      :
         v                  v          :      :
   +---------+        +---------+      :queue :
   | Builder |        | Builder |      +======+
   +---------+        +---------+         |
                                          v
                                    +---------+
                                    | Builder |
                                    +---------+

   * Change Sources, which create a Change object each time something
     is modified in the VC repository. Most ChangeSources listen for
     messages from a hook script of some sort. Some sources actively
     poll the repository on a regular basis. All Changes are fed to
     the Schedulers.

   * Schedulers, which decide when builds should be performed. They
     collect Changes into BuildRequests, which are then queued for
     delivery to Builders until a buildslave is available.

   * Builders, which control exactly _how_ each build is performed
     (with a series of BuildSteps, configured in a BuildFactory). Each
     Build is run on a single buildslave.

   * Status plugins, which deliver information about the build results
     through protocols like HTTP, mail, and IRC.
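
   Continuing the `master.cfg' sketch from the previous section, the
first two of these pieces might be wired up roughly like this (the
scheduler arguments shown are placeholders; the real interface is
covered in *note Configuration::):

     from buildbot.changes.pb import PBChangeSource
     from buildbot.scheduler import Scheduler

     # accept Changes pushed over a network connection, e.g. by a VC
     # hook script that runs `buildbot sendchange'
     c['change_source'] = PBChangeSource()

     # wait until the tree has been stable for two minutes, then
     # queue a BuildRequest for each of the named Builders
     c['schedulers'] = [Scheduler(name="all", branch=None,
                                  treeStableTimer=2*60,
                                  builderNames=["full-i386",
                                                "full-ppc"])]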




                       +-----------------+
                       |  BuildSlave     |
                       |                 |
                       |                 |
 +-------+             | +------------+  |
 |Builder|----Build----->|SlaveBuilder|  |
 +-------+             | +------------+  |
                       |                 |
                       | +------------+  |
             +-Build---->|SlaveBuilder|  |
             |         | +------------+  |
 +-------+   |         |                 |
 |Builder|---+         +-----------------+
 +-------+   |
             |
             |      +-----------------+
           Build    |  BuildSlave     |
             |      |                 |
             |      |                 |
             |      | +------------+  |
             +------->|SlaveBuilder|  |
                    | +------------+  |
 +-------+          |                 |
 |Builder|--+       | +------------+  |
 +-------+  +-------->|SlaveBuilder|  |
                    | +------------+  |
                    |                 |
                    +-----------------+

   Each Builder is configured with a list of BuildSlaves that it will
use for its builds. These buildslaves are expected to behave
identically: the only reason to use multiple BuildSlaves for a single
Builder is to provide a measure of load-balancing.

   Within a single BuildSlave, each Builder creates its own
SlaveBuilder instance. These SlaveBuilders operate independently from
each other.  Each gets its own base directory to work in. It is quite
common to have many Builders sharing the same buildslave. For
example, there might be two buildslaves: one for i386, and a second
for PowerPC.  There may then be a pair of Builders that do a full
compile/test run, one for each architecture, and a lone Builder that
creates snapshot source tarballs if the full builders complete
successfully. The full builders would each run on a single
buildslave, whereas the tarball creation step might run on either
buildslave (since the platform doesn't matter when creating source
tarballs). In this case, the mapping would look like:

     Builder(full-i386)  ->  BuildSlaves(slave-i386)
     Builder(full-ppc)   ->  BuildSlaves(slave-ppc)
     Builder(source-tarball) -> BuildSlaves(slave-i386, slave-ppc)

   and each BuildSlave would have two SlaveBuilders inside it, one
for a full builder, and a second for the source-tarball builder.
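
   In `master.cfg' terms, that mapping could be expressed with
something like the following sketch (the repository URL, commands,
and directory names are placeholders, and the generic steps shown
here are described in *note Build Steps::):

     from buildbot.process import factory
     from buildbot.steps.source import SVN
     from buildbot.steps.shell import Compile, Test, ShellCommand

     # full compile/test run, shared by both full builders
     f_full = factory.BuildFactory()
     f_full.addStep(SVN(svnurl="http://svn.example.org/PROJECT/trunk"))
     f_full.addStep(Compile())
     f_full.addStep(Test())

     # snapshot source tarball; the platform does not matter
     f_tarball = factory.BuildFactory()
     f_tarball.addStep(SVN(svnurl="http://svn.example.org/PROJECT/trunk"))
     f_tarball.addStep(ShellCommand(command=["make", "dist"]))

     c['builders'] = [
         {'name': "full-i386", 'slavename': "slave-i386",
          'builddir': "full-i386", 'factory': f_full},
         {'name': "full-ppc", 'slavename': "slave-ppc",
          'builddir': "full-ppc", 'factory': f_full},
         # this one may run on either buildslave
         {'name': "source-tarball",
          'slavenames': ["slave-i386", "slave-ppc"],
          'builddir': "source-tarball", 'factory': f_tarball},
     ]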

   Once a SlaveBuilder is available, the Builder pulls one or more
BuildRequests off its incoming queue. (It may pull more than one if it
determines that it can merge the requests together; for example, there
may be multiple requests to build the current HEAD revision). These
requests are merged into a single Build instance, which includes the
SourceStamp that describes what exact version of the source code
should be used for the build. The Build is then randomly assigned to a
free SlaveBuilder and the build begins.

   The behaviour when BuildRequests are merged can be customized,
*note Merging BuildRequests::.


File: buildbot.info,  Node: Status Delivery Architecture,  Prev: Buildmaster Architecture,  Up: System Architecture

1.2.3 Status Delivery Architecture
----------------------------------

The buildmaster maintains a central Status object, to which various
status plugins are connected. Through this Status object, a full
hierarchy of build status objects can be obtained.



  Status Objects            Status Plugins       User Clients

 +------+                   +---------+        +-----------+
 |Status|<--------------+-->|Waterfall|<-------|Web Browser|
 +------+               |   +---------+        +-----------+
    |  +-----+          |
    v        v          |
+-------+  +-------+    |     +---+            +----------+
|Builder|  |Builder|    +---->|IRC|<----------->IRC Server|
|Status |  |Status |    |     +---+            +----------+
+-------+  +-------+    |
    |  +----+           |
    v       v           |   +------------+     +----+
+------+  +------+      +-->|MailNotifier|---->|SMTP|
|Build |  |Build |          +------------+     +----+
|Status|  |Status|
+------+  +------+
    | +-----+
    v       v
+------+  +------+
|Step  |  |Step  |
|Status|  |Status|
+------+  +------+
   | +---+
   v     v
+----+ +----+
|Log | |Log |
|File| |File|
+----+ +----+

   The configuration file controls which status plugins are active.
Each status plugin gets a reference to the top-level Status object.
From there they can request information on each Builder, Build, Step,
and LogFile. This query-on-demand interface is used by the
html.Waterfall plugin to create the main status page each time a web
browser hits the main URL.

   The status plugins can also subscribe to hear about new Builds as
they occur: this is used by the MailNotifier to create new email
messages for each recently-completed Build.
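
   As a sketch of how these plugins are activated (the hostnames,
port numbers, and addresses below are placeholders), the `c['status']'
list in `master.cfg' might contain entries like:

     from buildbot.status import html, mail, words
     c['status'] = [
         # query-on-demand web display (Waterfall and friends)
         html.WebStatus(http_port=8010),
         # subscribes to completed Builds and sends email
         mail.MailNotifier(fromaddr="buildbot@example.org",
                           mode="problem"),
         # relays build results to an IRC channel
         words.IRC(host="irc.example.org", nick="bbot",
                   channels=["#buildbot"]),
     ]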

   The Status object records the status of old builds on disk in the
buildmaster's base directory. This allows it to return information
about historical builds.

   There are also status objects that correspond to Schedulers and
BuildSlaves. These allow status plugins to report information about
upcoming builds, and the online/offline status of each buildslave.


File: buildbot.info,  Node: Control Flow,  Prev: System Architecture,  Up: Introduction

1.3 Control Flow
================

A day in the life of the buildbot:

   * A developer commits some source code changes to the repository.
     A hook script or commit trigger of some sort sends information
     about this change to the buildmaster through one of its
     configured Change Sources. This notification might arrive via
     email, or over a network connection (either initiated by the
     buildmaster as it "subscribes" to changes, or by the commit
     trigger as it pushes Changes towards the buildmaster). The
     Change contains information about who made the change, what
     files were modified, which revision contains the change, and any
     checkin comments.

   * The buildmaster distributes this change to all of its configured
     Schedulers. Any "important" changes cause the "tree-stable-timer"
     to be started, and the Change is added to a list of those that
     will go into a new Build. When the timer expires, a Build is
     started on each of a set of configured Builders, all
     compiling/testing the same source code. Unless configured
     otherwise, all Builds run in parallel on the various buildslaves.

   * The Build consists of a series of Steps. Each Step causes some
     number of commands to be invoked on the remote buildslave
     associated with that Builder. The first step is almost always to
     perform a checkout of the appropriate revision from the same VC
     system that produced the Change. The rest generally perform a
     compile and run unit tests. As each Step runs, the buildslave
     reports back command output and return status to the buildmaster.

   * As the Build runs, status messages like "Build Started", "Step
     Started", "Build Finished", etc, are published to a collection of
     Status Targets. One of these targets is usually the HTML
     "Waterfall" display, which shows a chronological list of events,
     and summarizes the results of the most recent build at the top
     of each column.  Developers can periodically check this page to
     see how their changes have fared. If they see red, they know
     that they've made a mistake and need to fix it. If they see
     green, they know that they've done their duty and don't need to
     worry about their change breaking anything.

   * If a MailNotifier status target is active, the completion of a
     build will cause email to be sent to any developers whose
     Changes were incorporated into this Build. The MailNotifier can
     be configured to only send mail upon failing builds, or for
     builds which have just transitioned from passing to failing.
     Other status targets can provide similar real-time notification
     via different communication channels, like IRC.



File: buildbot.info,  Node: Installation,  Next: Concepts,  Prev: Introduction,  Up: Top

2 Installation
**************

* Menu:

* Requirements::
* Installing the code::
* Creating a buildmaster::
* Upgrading an Existing Buildmaster::
* Creating a buildslave::
* Launching the daemons::
* Logfiles::
* Shutdown::
* Maintenance::
* Troubleshooting::


File: buildbot.info,  Node: Requirements,  Next: Installing the code,  Prev: Installation,  Up: Installation

2.1 Requirements
================

At a bare minimum, you'll need the following (for both the buildmaster
and a buildslave):

   * Python: http://www.python.org

     Buildbot requires python-2.3 or later, and is primarily developed
     against python-2.4. It is also tested against python-2.5.

   * Twisted: http://twistedmatrix.com

     Both the buildmaster and the buildslaves require Twisted-2.0.x or
     later. It has been tested against all releases of Twisted up to
     Twisted-2.5.0 (the most recent as of this writing). As always,
     the most recent version is recommended.

     Twisted is delivered as a collection of subpackages. You'll need
     at least "Twisted" (the core package), and you'll also want
     TwistedMail, TwistedWeb, and TwistedWords (for sending email,
     serving a web status page, and delivering build status via IRC,
     respectively). You might also want TwistedConch (for the
     encrypted Manhole debug port). Note that Twisted requires
     ZopeInterface to be installed as well.


   Certain other packages may be useful on the system running the
buildmaster:

   * CVSToys: http://purl.net/net/CVSToys

     If your buildmaster uses FreshCVSSource to receive change
     notification from a cvstoys daemon, it will require CVSToys be
     installed (tested with CVSToys-1.0.10). If it doesn't use
     that source (i.e. if you only use a mail-parsing change source,
     or the SVN notification script), you will not need CVSToys.


   And of course, your project's build process will impose additional
requirements on the buildslaves. These hosts must have all the tools
necessary to compile and test your project's source code.


File: buildbot.info,  Node: Installing the code,  Next: Creating a buildmaster,  Prev: Requirements,  Up: Installation

2.2 Installing the code
=======================

The Buildbot is installed using the standard python `distutils'
module. After unpacking the tarball, the process is:

     python setup.py build
     python setup.py install

   where the install step may need to be done as root. This will put
the bulk of the code in somewhere like
/usr/lib/python2.3/site-packages/buildbot . It will also install the
`buildbot' command-line tool in /usr/bin/buildbot.

   To test this, shift to a different directory (like /tmp), and run:

     buildbot --version

   If it shows you the versions of Buildbot and Twisted, the install
went ok. If it says `no such command' or it gets an `ImportError'
when it tries to load the libraries, then something went wrong.
`pydoc buildbot' is another useful diagnostic tool.
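
   Another quick check (the values printed will of course depend on
what you installed) is to confirm that the libraries are importable
from an interactive python session:

     >>> import buildbot, twisted
     >>> print buildbot.version
     >>> print twisted.version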

   Windows users will find these files in other places. You will need
to make sure that python can find the libraries, and will probably
find it convenient to have `buildbot' on your PATH.

   If you wish, you can run the buildbot unit test suite like this:

     PYTHONPATH=. trial buildbot.test

   This should run up to 192 tests, depending upon what VC tools you
have installed. On my desktop machine it takes about five minutes to
complete. Nothing should fail, though a few might be skipped. If any
of the tests fail, you should stop and investigate the cause before
continuing the installation process, as it will probably be easier to
track down the bug early.

   If you cannot or do not wish to install the buildbot into a
site-wide location like `/usr' or `/usr/local', you can also install
it into the account's home directory. Do the install command like
this:

     python setup.py install --home=~

   That will populate `~/lib/python' and create `~/bin/buildbot'.
Make sure this lib directory is on your `PYTHONPATH'.


File: buildbot.info,  Node: Creating a buildmaster,  Next: Upgrading an Existing Buildmaster,  Prev: Installing the code,  Up: Installation

2.3 Creating a buildmaster
==========================

As you learned earlier (*note System Architecture::), the buildmaster
runs on a central host (usually one that is publicly visible, so
everybody can check on the status of the project), and controls all
aspects of the buildbot system. Let us call this host
`buildbot.example.org'.

   You may wish to create a separate user account for the buildmaster,
perhaps named `buildmaster'. This can help keep your personal
configuration distinct from that of the buildmaster and is useful if
you have to use a mail-based notification system (*note Change
Sources::). However, the Buildbot will work just fine with your
regular user account.

   You need to choose a directory for the buildmaster, called the
`basedir'. This directory will be owned by the buildmaster, which
will use configuration files therein, and create status files as it
runs. `~/Buildbot' is a likely value. If you run multiple
buildmasters in the same account, or if you run both masters and
slaves, you may want a more distinctive name like
`~/Buildbot/master/gnomovision' or `~/Buildmasters/fooproject'. If
you are using a separate user account, this might just be
`~buildmaster/masters/fooproject'.

   Once you've picked a directory, use the `buildbot create-master'
command to create the directory and populate it with startup files:

     buildbot create-master BASEDIR

   You will need to create a configuration file (*note
Configuration::) before starting the buildmaster. Most of the rest of
this manual is dedicated to explaining how to do this. A sample
configuration file is placed in the working directory, named
`master.cfg.sample', which can be copied to `master.cfg' and edited
to suit your purposes.
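
   To give a rough feel for what that file contains, here is a
heavily abbreviated sketch (the slave name, password, repository URL,
and port numbers are placeholders; the Configuration chapter covers
the real details):

     c = BuildmasterConfig = {}

     from buildbot.buildslave import BuildSlave
     c['slaves'] = [BuildSlave("slave1", "slavepasswd")]
     c['slavePortnum'] = 9989

     from buildbot.scheduler import Scheduler
     c['schedulers'] = [Scheduler(name="all", branch=None,
                                  treeStableTimer=2*60,
                                  builderNames=["full"])]

     from buildbot.process import factory
     from buildbot.steps.source import SVN
     from buildbot.steps.shell import Compile
     f = factory.BuildFactory()
     f.addStep(SVN(svnurl="svn://svn.example.org/myproj/trunk"))
     f.addStep(Compile())
     c['builders'] = [{'name': 'full', 'slavename': 'slave1',
                       'builddir': 'full', 'factory': f}]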

   (Internal details: This command creates a file named
`buildbot.tac' that contains all the state necessary to create the
buildmaster. Twisted has a tool called `twistd' which can use this
.tac file to create and launch a buildmaster instance. twistd takes
care of logging and daemonization (running the program in the
background). `/usr/bin/buildbot' is a front end which runs twistd for
you.)

   In addition to `buildbot.tac', a small `Makefile.sample' is
installed. This can be used as the basis for customized daemon
startup, *Note Launching the daemons::.


File: buildbot.info,  Node: Upgrading an Existing Buildmaster,  Next: Creating a buildslave,  Prev: Creating a buildmaster,  Up: Installation

2.4 Upgrading an Existing Buildmaster
=====================================

If you have just installed a new version of the Buildbot code, and you
have buildmasters that were created using an older version, you'll
need to upgrade these buildmasters before you can use them. The
upgrade process adds and modifies files in the buildmaster's base
directory to make it compatible with the new code.

     buildbot upgrade-master BASEDIR

   This command will also scan your `master.cfg' file for
incompatibilities (by loading it and printing any errors or deprecation
warnings that occur). Each buildbot release tries to be compatible
with configurations that worked cleanly (i.e. without deprecation
warnings) on the previous release: any functions or classes that are
to be removed will first be deprecated in a release, to give users a
chance to start using their replacement.

   The 0.7.6 release introduced the `public_html/' directory, which
contains `index.html' and other files served by the `WebStatus' and
`Waterfall' status displays. The `upgrade-master' command will create
these files if they do not already exist. It will not modify existing
copies, but it will write a new copy in e.g. `index.html.new' if the
new version differs from the version that already exists.

   The `upgrade-master' command is idempotent. It is safe to run it
multiple times. After each upgrade of the buildbot code, you should
use `upgrade-master' on all your buildmasters.


File: buildbot.info,  Node: Creating a buildslave,  Next: Launching the daemons,  Prev: Upgrading an Existing Buildmaster,  Up: Installation

2.5 Creating a buildslave
=========================

Typically, you will be adding a buildslave to an existing buildmaster,
to provide additional architecture coverage. The buildbot
administrator will give you several pieces of information necessary to
connect to the buildmaster. You should also be somewhat familiar with
the project being tested, so you can troubleshoot build problems
locally.

   The buildbot exists to make sure that the project's stated "how to
build it" process actually works. To this end, the buildslave should
run in an environment just like that of your regular developers.
Typically the project build process is documented somewhere
(`README', `INSTALL', etc), in a document that should mention all
library dependencies and contain a basic set of build instructions.
This document will be useful as you configure the host and account in
which the buildslave runs.

   Here's a good checklist for setting up a buildslave:

  1. Set up the account

     It is recommended (although not mandatory) to set up a separate
     user account for the buildslave. This account is frequently named
     `buildbot' or `buildslave'. This serves to isolate your personal
     working environment from that of the slave, and helps to
     minimize the security threat posed by letting possibly-unknown
     contributors run arbitrary code on your system. The account
     should have a minimum of fancy init scripts.

  2. Install the buildbot code

     Follow the instructions given earlier (*note Installing the
     code::).  If you use a separate buildslave account, and you
     didn't install the buildbot code to a shared location, then you
     will need to install it with `--home=~' for each account that
     needs it.

  3. Set up the host

     Make sure the host can actually reach the buildmaster. Usually
     the buildmaster is running a status webserver on the same
     machine, so simply point your web browser at it and see if you
     can get there.  Install whatever additional packages or
     libraries the project's INSTALL document advises. (or not: if
     your buildslave is supposed to make sure that building without
     optional libraries still works, then don't install those
     libraries).

     Again, these libraries don't necessarily have to be installed to
     a site-wide shared location, but they must be available to your
     build process. Accomplishing this is usually very specific to
     the build process, so installing them to `/usr' or `/usr/local'
     is usually the best approach.

  4. Test the build process

     Follow the instructions in the INSTALL document, in the
     buildslave's account. Perform a full CVS (or whatever) checkout,
     configure, make, run tests, etc. Confirm that the build works
     without manual fussing.  If it doesn't work when you do it by
     hand, it will be unlikely to work when the buildbot attempts to
     do it in an automated fashion.

  5. Choose a base directory

     This should be somewhere in the buildslave's account, typically
     named after the project which is being tested. The buildslave
     will not touch any file outside of this directory. Something
     like `~/Buildbot' or `~/Buildslaves/fooproject' is appropriate.

  6. Get the buildmaster host/port, botname, and password

     When the buildbot admin configures the buildmaster to accept and
     use your buildslave, they will provide you with the following
     pieces of information:

        * your buildslave's name

        * the password assigned to your buildslave

        * the hostname and port number of the buildmaster, i.e.
          buildbot.example.org:8007

  7. Create the buildslave

     Now run the `buildbot' command as follows:

          buildbot create-slave BASEDIR MASTERHOST:PORT SLAVENAME PASSWORD

     This will create the base directory and a collection of files
     inside, including the `buildbot.tac' file that contains all the
     information you passed to the `buildbot' command.

  8. Fill in the hostinfo files

     When it first connects, the buildslave will send a few files up
     to the buildmaster which describe the host that it is running
     on. These files are presented on the web status display so that
     developers have more information to reproduce any test failures
     that are witnessed by the buildbot. There are sample files in
     the `info' subdirectory of the buildbot's base directory. You
     should edit these to correctly describe you and your host.

     `BASEDIR/info/admin' should contain your name and email address.
     This is the "buildslave admin address", and will be visible from
     the build status page (so you may wish to munge it a bit if
     address-harvesting spambots are a concern).

     `BASEDIR/info/host' should be filled with a brief description of
     the host: OS, version, memory size, CPU speed, versions of
     relevant libraries installed, and finally the version of the
     buildbot code which is running the buildslave.

     If you run many buildslaves, you may want to create a single
     `~buildslave/info' file and share it among all the buildslaves
     with symlinks.


* Menu:

* Buildslave Options::


File: buildbot.info,  Node: Buildslave Options,  Prev: Creating a buildslave,  Up: Creating a buildslave

2.5.1 Buildslave Options
------------------------

There are a handful of options you might want to use when creating the
buildslave with the `buildbot create-slave <options> DIR <params>'
command. You can type `buildbot create-slave --help' for a summary.
To use these, just include them on the `buildbot create-slave'
command line, like this:

     buildbot create-slave --umask=022 ~/buildslave buildmaster.example.org:42012 myslavename mypasswd

`--usepty'
     This is a boolean flag that tells the buildslave whether to
     launch child processes in a PTY or with regular pipes (the
     default) when the master does not specify.  This option is
     deprecated, as this particular parameter is better specified on
     the master.

`--umask'
     This is a string (generally an octal representation of an
     integer) which will cause the buildslave process' "umask" value
     to be set shortly after initialization. The "twistd"
     daemonization utility forces the umask to 077 at startup (which
     means that all files created by the buildslave or its child
     processes will be unreadable by any user other than the
     buildslave account). If you want build products to be readable
     by other accounts, you can add `--umask=022' to tell the
     buildslave to fix the umask after twistd clobbers it. If you want
     build products to be _writable_ by other accounts too, use
     `--umask=000', but this is likely to be a security problem.

`--keepalive'
     This is a number that indicates how frequently "keepalive"
     messages should be sent from the buildslave to the buildmaster,
     expressed in seconds. The default (600) causes a message to be
     sent to the buildmaster at least once every 10 minutes. To set
     this to a lower value, use e.g. `--keepalive=120'.

     If the buildslave is behind a NAT box or stateful firewall, these
     messages may help to keep the connection alive: some NAT boxes
     tend to forget about a connection if it has not been used in a
     while. When this happens, the buildmaster will think that the
     buildslave has disappeared, and builds will time out. Meanwhile
     the buildslave will not realize that anything is wrong.

`--maxdelay'
     This is a number that indicates the maximum amount of time the
     buildslave will wait between connection attempts, expressed in
     seconds. The default (300) causes the buildslave to wait at most
     5 minutes before trying to connect to the buildmaster again.

`--log-size'
     This is the size, in bytes, at which the Twisted log file is
     rotated.

`--log-count'
     This is the number of log rotations to keep around. You can
     either specify a number or `None' (the default) to keep all
     `twistd.log' files around.



File: buildbot.info,  Node: Launching the daemons,  Next: Logfiles,  Prev: Creating a buildslave,  Up: Installation

2.6 Launching the daemons
=========================

Both the buildmaster and the buildslave run as daemon programs. To
launch them, pass the working directory to the `buildbot' command:

     buildbot start BASEDIR

   This command will start the daemon and then return, so normally it
will not produce any output. To verify that the programs are indeed
running, look for a pair of files named `twistd.log' and `twistd.pid'
that should be created in the working directory.  `twistd.pid'
contains the process ID of the newly-spawned daemon.

   When the buildslave connects to the buildmaster, new directories
will start appearing in its base directory. The buildmaster tells the
slave to create a directory for each Builder which will be using that
slave.  All build operations are performed within these directories:
CVS checkouts, compiles, and tests.

   Once you get everything running, you will want to arrange for the
buildbot daemons to be started at boot time. One way is to use
`cron', by putting them in a @reboot crontab entry(1):

     @reboot buildbot start BASEDIR

   When you run `crontab' to set this up, remember to do it as the
buildmaster or buildslave account! If you add this to your crontab
when running as your regular account (or worse yet, root), then the
daemon will run as the wrong user, quite possibly as one with more
authority than you intended to provide.

   It is important to remember that the environment provided to cron
jobs and init scripts can be quite different from your normal runtime.
There may be fewer environment variables specified, and the PATH may
be shorter than usual. It is a good idea to test out this method of
launching the buildslave by using a cron job with a time in the near
future, with the same command, and then check `twistd.log' to make
sure the slave actually started correctly. Common problems here are
for `/usr/local' or `~/bin' to not be on your `PATH', or for
`PYTHONPATH' to not be set correctly.  Sometimes `HOME' is messed up
too.

   To modify the way the daemons are started (perhaps you want to set
some environment variables first, or perform some cleanup each time),
you can create a file named `Makefile.buildbot' in the base
directory. When the `buildbot' front-end tool is told to `start' the
daemon, and it sees this file (and `/usr/bin/make' exists), it will
do `make -f Makefile.buildbot start' instead of its usual action
(which involves running `twistd'). When the buildmaster or buildslave
is installed, a `Makefile.sample' is created which implements the
same behavior that the `buildbot' tool uses, so if you want to
customize the process, just copy `Makefile.sample' to
`Makefile.buildbot' and edit it as necessary.

   Some distributions may include conveniences to make starting
buildbot at boot time easy.  For instance, with the default buildbot
package in Debian-based distributions, you may only need to modify
`/etc/default/buildbot' (see also `/etc/init.d/buildbot', which reads
the configuration in `/etc/default/buildbot').

   ---------- Footnotes ----------

   (1) this @reboot syntax is understood by Vixie cron, which is the
flavor usually provided with linux systems. Other unices may have a
cron that doesn't understand @reboot


File: buildbot.info,  Node: Logfiles,  Next: Shutdown,  Prev: Launching the daemons,  Up: Installation

2.7 Logfiles
============

While a buildbot daemon runs, it emits text to a logfile, named
`twistd.log'. A command like `tail -f twistd.log' is useful to watch
the command output as it runs.

   The buildmaster will announce any errors with its configuration
file in the logfile, so it is a good idea to look at the log at
startup time to check for any problems. Most buildmaster activities
will cause lines to be added to the log.


File: buildbot.info,  Node: Shutdown,  Next: Maintenance,  Prev: Logfiles,  Up: Installation

2.8 Shutdown
============

To stop a buildmaster or buildslave manually, use:

     buildbot stop BASEDIR

   This simply looks for the `twistd.pid' file and kills whatever
process is identified within.

   At system shutdown, all processes are sent a `SIGTERM'. The
buildmaster and buildslave will respond to this by shutting down
normally.

   The buildmaster will respond to a `SIGHUP' by re-reading its
config file. Of course, this only works on unix-like systems with
signal support, and won't work on Windows. The following shortcut is
available:

     buildbot reconfig BASEDIR

   When you update the Buildbot code to a new release, you will need
to restart the buildmaster and/or buildslave before it can take
advantage of the new code. You can do a `buildbot stop BASEDIR' and
`buildbot start BASEDIR' in quick succession, or you can use the
`restart' shortcut, which does both steps for you:

     buildbot restart BASEDIR

   There are certain configuration changes that are not handled
cleanly by `buildbot reconfig'. If this occurs, `buildbot restart' is
a more robust tool to fully switch over to the new configuration.

   `buildbot restart' may also be used to start a stopped Buildbot
instance. This behaviour is useful when writing scripts that stop,
start and restart Buildbot.

   A buildslave may also be gracefully shut down from the *note
WebStatus:: status plugin. This is useful for shutting down a
buildslave without interrupting any current builds. The buildmaster
will wait until the buildslave has finished all its current builds,
and will then tell the buildslave to shut down.


File: buildbot.info,  Node: Maintenance,  Next: Troubleshooting,  Prev: Shutdown,  Up: Installation

2.9 Maintenance
===============

It is a good idea to check the buildmaster's status page every once in
a while, to see if your buildslave is still online. Eventually the
buildbot will probably be enhanced to send you email (via the
`info/admin' email address) when the slave has been offline for more
than a few hours.

   If you find you can no longer provide a buildslave to the project,
please let the project admins know, so they can put out a call for a
replacement.

   The Buildbot records status and logs output continually, each time
a build is performed. The status tends to be small, but the build logs
can become quite large. Each build and log are recorded in a separate
file, arranged hierarchically under the buildmaster's base directory.
To prevent these files from growing without bound, you should
periodically delete old build logs. A simple cron job to delete
anything older than, say, two weeks should do the job. The only trick
is to leave the `buildbot.tac' and other support files alone, for
which find's `-mindepth' argument helps skip everything in the top
directory. You can use something like the following:

     @weekly cd BASEDIR && find . -mindepth 2 -ipath './public_html/*' -prune -o -type f -mtime +14 -exec rm {} \;
     @weekly cd BASEDIR && find twistd.log* -mtime +14 -exec rm {} \;


File: buildbot.info,  Node: Troubleshooting,  Prev: Maintenance,  Up: Installation

2.10 Troubleshooting
====================

Here are a few hints on diagnosing common problems.

* Menu:

* Starting the buildslave::
* Connecting to the buildmaster::
* Forcing Builds::


File: buildbot.info,  Node: Starting the buildslave,  Next: Connecting to the buildmaster,  Prev: Troubleshooting,  Up: Troubleshooting

2.10.1 Starting the buildslave
------------------------------

Cron jobs are typically run with a minimal shell (`/bin/sh', not
`/bin/bash'), and tilde expansion is not always performed in such
commands. You may want to use explicit paths, because the `PATH' is
usually quite short and doesn't include anything set by your shell's
startup scripts (`.profile', `.bashrc', etc). If you've installed
buildbot (or other python libraries) to an unusual location, you may
need to add a `PYTHONPATH' specification (note that python will do
tilde-expansion on `PYTHONPATH' elements by itself). Sometimes it is
safer to fully-specify everything:

     @reboot PYTHONPATH=~/lib/python /usr/local/bin/buildbot start /usr/home/buildbot/basedir

   Take the time to get the @reboot job set up. Otherwise, things
will work fine for a while, but the first power outage or system
reboot you have will stop the buildslave with nothing but the cries
of sorrowful developers to remind you that it has gone away.


File: buildbot.info,  Node: Connecting to the buildmaster,  Next: Forcing Builds,  Prev: Starting the buildslave,  Up: Troubleshooting

2.10.2 Connecting to the buildmaster
------------------------------------

If the buildslave cannot connect to the buildmaster, the reason should
be described in the `twistd.log' logfile. Some common problems are an
incorrect master hostname or port number, or a mistyped bot name or
password. If the buildslave loses the connection to the master, it is
supposed to attempt to reconnect with an exponentially-increasing
backoff. Each attempt (and the time of the next attempt) will be
logged. If you get impatient, just manually stop and re-start the
buildslave.

   When the buildmaster is restarted, all slaves will be disconnected,
and will attempt to reconnect as usual. The reconnect time will depend
upon how long the buildmaster is offline (i.e. how far up the
exponential backoff curve the slaves have travelled). Again,
`buildbot stop BASEDIR; buildbot start BASEDIR' will speed up the
process.


File: buildbot.info,  Node: Forcing Builds,  Prev: Connecting to the buildmaster,  Up: Troubleshooting

2.10.3 Forcing Builds
---------------------

From the buildmaster's main status web page, you can force a build to
be run on your build slave. Figure out which column is for a builder
that runs on your slave, click on that builder's name, and the page
that comes up will have a "Force Build" button. Fill in the form, hit
the button, and a moment later you should see your slave's
`twistd.log' filling with commands being run. Using `pstree' or `top'
should also reveal the cvs/make/gcc/etc processes being run by the
buildslave. Note that the same web page should also show the `admin'
and `host' information files that you configured earlier.


File: buildbot.info,  Node: Concepts,  Next: Configuration,  Prev: Installation,  Up: Top

3 Concepts
**********

This chapter defines some of the basic concepts that the Buildbot
uses. You'll need to understand how the Buildbot sees the world to
configure it properly.

* Menu:

* Version Control Systems::
* Schedulers::
* BuildSet::
* BuildRequest::
* Builder::
* Users::
* Build Properties::


File: buildbot.info,  Node: Version Control Systems,  Next: Schedulers,  Prev: Concepts,  Up: Concepts

3.1 Version Control Systems
===========================

The source trees that the Buildbot builds come from a Version Control
System of some kind.
CVS and Subversion are two popular ones, but the Buildbot supports
others. All VC systems have some notion of an upstream `repository'
which acts as a server(1), from which clients can obtain source trees
according to various parameters. The VC repository provides source
trees of various projects, for different branches, and from various
points in time. The first thing we have to do is to specify which
source tree we want to get.

* Menu:

* Generalizing VC Systems::
* Source Tree Specifications::
* How Different VC Systems Specify Sources::
* Attributes of Changes::

   ---------- Footnotes ----------

   (1) except Darcs, but since the Buildbot never modifies its local
source tree we can ignore the fact that Darcs uses a less centralized
model


File: buildbot.info,  Node: Generalizing VC Systems,  Next: Source Tree Specifications,  Prev: Version Control Systems,  Up: Version Control Systems

3.1.1 Generalizing VC Systems
-----------------------------

For the purposes of the Buildbot, we will try to generalize all VC
systems as having repositories that each provide sources for a variety
of projects. Each project is defined as a directory tree with source
files. The individual files may each have revisions, but we ignore
that and treat the project as a whole as having a set of revisions
(CVS is really the only VC system still in widespread use that has
per-file revisions; everything modern has moved to atomic tree-wide
changesets). Each time someone commits a change to the project, a new
revision becomes available. These revisions can be described by a
tuple with two items: the first is a branch tag, and the second is
some kind of revision stamp or timestamp. Complex projects may have
multiple branch tags, but there is always a default branch. The
timestamp may be an actual timestamp (such as the -D option to CVS),
or it may be a monotonically-increasing transaction number (such as
the change number used by SVN and P4, or the revision number used by
Arch/Baz/Bazaar, or a labeled tag used in CVS)(1). The SHA1 revision
ID used by Monotone, Mercurial, and Git is also a kind of revision
stamp, in that it specifies a unique copy of the source tree, as does
a Darcs "context" file.

   When we aren't intending to make any changes to the sources we
check out (at least not any that need to be committed back upstream),
there are two basic ways to use a VC system:

   * Retrieve a specific set of source revisions: some tag or key is
     used to index this set, which is fixed and cannot be changed by
     subsequent developers committing new changes to the tree.
     Releases are built from tagged revisions like this, so that they
     can be rebuilt again later (probably with controlled
     modifications).

   * Retrieve the latest sources along a specific branch: some tag is
     used to indicate which branch is to be used, but within that
     constraint we want to get the latest revisions.

   Build personnel or CM staff typically use the first approach: the
build that results is (ideally) completely specified by the two
parameters given to the VC system: repository and revision tag. This
gives QA and end-users something concrete to point at when reporting
bugs. Release engineers are also reportedly fond of shipping code that
can be traced back to a concise revision tag of some sort.

   Developers are more likely to use the second approach: each morning
the developer does an update to pull in the changes committed by the
team over the last day. These builds are not easy to fully specify: it
depends upon exactly when you did a checkout, and upon what local
changes the developer has in their tree. Developers do not normally
tag each build they produce, because there is usually significant
overhead involved in creating these tags. Recreating the trees used by
one of these builds can be a challenge. Some VC systems may provide
implicit tags (like a revision number), while others may allow the use
of timestamps to mean "the state of the tree at time X" as opposed to
a tree-state that has been explicitly marked.

   The Buildbot is designed to help developers, so it usually works in
terms of _the latest_ sources as opposed to specific tagged
revisions. However, it would really prefer to build from reproducible
source trees, so implicit revisions are used whenever possible.

   ---------- Footnotes ----------

   (1) many VC systems provide more complexity than this: in
particular the local views that P4 and ClearCase can assemble out of
various source directories are more complex than we're prepared to
take advantage of here


File: buildbot.info,  Node: Source Tree Specifications,  Next: How Different VC Systems Specify Sources,  Prev: Generalizing VC Systems,  Up: Version Control Systems

3.1.2 Source Tree Specifications
--------------------------------

So for the Buildbot's purposes we treat each VC system as a server
which can take a list of specifications as input and produce a source
tree as output. Some of these specifications are static: they are
attributes of the builder and do not change over time. Others are more
variable: each build will have a different value. The repository is
changed over time by a sequence of Changes, each of which represents a
single developer making changes to some set of files. These Changes
are cumulative(1).

   For normal builds, the Buildbot wants to get well-defined source
trees that contain specific Changes, and exclude other Changes that
may have occurred after the desired ones. We assume that the Changes
arrive at the buildbot (through one of the mechanisms described in
*note Change Sources::) in the same order in which they are committed
to the repository. The Buildbot waits for the tree to become "stable"
before initiating a build, for two reasons. The first is that
developers frequently make multiple related commits in quick
succession, even when the VC system provides ways to make atomic
transactions involving multiple files at the same time. Running a
build in the middle of these sets of changes would use an inconsistent
set of source files, and is likely to fail (and is certain to be less
useful than a build which uses the full set of changes). The
tree-stable-timer is intended to avoid these useless builds that
include some of the developer's changes but not all. The second reason
is that some VC systems (i.e. CVS) do not provide repository-wide
transaction numbers, so that timestamps are the only way to refer to
a specific repository state. These timestamps may be somewhat
ambiguous, due to processing and notification delays. By waiting until
the tree has been stable for, say, 10 minutes, we can choose a
timestamp from the middle of that period to use for our source
checkout, and then be reasonably sure that any clock-skew errors will
not cause the build to be performed on an inconsistent set of source
files.

   The Schedulers always use the tree-stable-timer, with a timeout
that is configured to reflect a reasonable tradeoff between build
latency and change frequency. When the VC system provides coherent
repository-wide revision markers (such as Subversion's revision
numbers, or in fact anything other than CVS's timestamps), the
resulting Build is simply performed against a source tree defined by
that revision marker. When the VC system does not provide this, a
timestamp from the middle of the tree-stable period is used to
generate the source tree(2).

   ---------- Footnotes ----------

   (1) Monotone's _multiple heads_ feature violates this assumption
of cumulative Changes, but in most situations the changes don't occur
frequently enough for this to be a significant problem

   (2) this `checkoutDelay' defaults to half the tree-stable timer,
but it can be overridden with an argument to the Source Step


File: buildbot.info,  Node: How Different VC Systems Specify Sources,  Next: Attributes of Changes,  Prev: Source Tree Specifications,  Up: Version Control Systems

3.1.3 How Different VC Systems Specify Sources
----------------------------------------------

For CVS, the static specifications are `repository' and `module'. In
addition to those, each build uses a timestamp (or omits the
timestamp to mean `the latest') and `branch tag' (which defaults to
HEAD). These parameters collectively specify a set of sources from
which a build may be performed.

   Subversion (http://subversion.tigris.org) combines the repository,
module, and branch into a single `Subversion URL' parameter. Within
that scope, source checkouts can be specified by a numeric `revision
number' (a repository-wide monotonically-increasing marker, such that
each transaction that changes the repository is indexed by a
different revision number), or a revision timestamp. When branches
are used, the repository and module form a static `baseURL', while
each build has a `revision number' and a `branch' (which defaults to a
statically-specified `defaultBranch'). The `baseURL' and `branch' are
simply concatenated together to derive the `svnurl' to use for the
checkout.
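
   As a sketch (the repository URL is a placeholder), a branch-aware
Subversion checkout step in a BuildFactory might therefore be
configured like this:

     from buildbot.steps.source import SVN
     f.addStep(SVN(mode="update",
                   baseURL="svn://svn.example.org/myproj/",
                   defaultBranch="trunk"))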

   Perforce (http://www.perforce.com/) is similar. The server is
specified through a `P4PORT' parameter. Module and branch are
specified in a single depot path, and revisions are depot-wide. When
branches are used, the `p4base' and `defaultBranch' are concatenated
together to produce the depot path.

   Arch (http://wiki.gnuarch.org/) and Bazaar
(http://bazaar.canonical.com/) specify a repository by URL, as well
as a `version' which is kind of like a branch name.  Arch uses the
word `archive' to represent the repository. Arch lets you push
changes from one archive to another, removing the strict
centralization required by CVS and SVN. It retains the distinction
between repository and working directory that most other VC systems
use. For complex multi-module directory structures, Arch has a
built-in `build config' layer with which the checkout process has two
steps. First, an initial bootstrap checkout is performed to retrieve
a set of build-config files. Second, one of these files is used to
figure out which archives/modules should be used to populate
subdirectories of the initial checkout.

   Builders which use Arch and Bazaar therefore have a static archive
`url', and a default "branch" (which is a string that specifies a
complete category-branch-version triple). Each build can have its own
branch (the category-branch-version string) to override the default,
as well as a revision number (which is turned into a -patch-NN suffix
when performing the checkout).

   Bzr (http://bazaar-vcs.org) (which is a descendant of Arch/Bazaar,
and is frequently referred to as "Bazaar") has the same sort of
repository-vs-workspace model as Arch, but the repository data can
either be stored inside the working directory or kept elsewhere
(either on the same machine or on an entirely different machine). For
the purposes of Buildbot (which never commits changes), the repository
is specified with a URL and a revision number.

   The most common way to obtain read-only access to a bzr tree is via
HTTP, simply by making the repository visible through a web server
like Apache. Bzr can also use FTP and SFTP servers, if the buildslave
process has sufficient privileges to access them. Higher performance
can be obtained by running a special Bazaar-specific server. None of
these matter to the buildbot: the repository URL just has to match the
kind of server being used. The `repoURL' argument provides the
location of the repository.

   Branches are expressed as subdirectories of the main central
repository, which means that if branches are being used, the BZR step
is given a `baseURL' and `defaultBranch' instead of getting the
`repoURL' argument.

   Darcs (http://darcs.net/) doesn't really have the notion of a
single master repository. Nor does it really have branches. In Darcs,
each working directory is also a repository, and there are operations
to push and pull patches from one of these `repositories' to another.
For the Buildbot's purposes, all you need to do is specify the URL of
a repository that you want to build from. The build slave will then
pull the latest patches from that repository and build them. Multiple
branches are implemented by using multiple repositories (possibly
living on the same server).

   Builders which use Darcs therefore have a static `repourl' which
specifies the location of the repository. If branches are being used,
the source Step is instead configured with a `baseURL' and a
`defaultBranch', and the two strings are simply concatenated together
to obtain the repository's URL. Each build then has a specific branch
which replaces `defaultBranch', or just uses the default one. Instead
of a revision number, each build can have a "context", which is a
string that records all the patches that are present in a given tree
(this is the output of `darcs changes --context', and is considerably
less concise than, e.g. Subversion's revision number, but the
patch-reordering flexibility of Darcs makes it impossible to provide
a shorter useful specification).

   Mercurial (http://selenic.com/mercurial) is like Darcs, in that
each branch is stored in a separate repository. The `repourl',
`baseURL', and `defaultBranch' arguments are all handled the same way
as with Darcs. The "revision", however, is the hash identifier
returned by `hg identify'.

   Git (http://git.or.cz/) also follows a decentralized model, and
each repository can have several branches and tags. The source Step is
configured with a static `repourl' which specifies the location of
the repository. In addition, an optional `branch' parameter can be
specified to check out code from a specific branch instead of the
default "master" branch. The "revision" is specified as a SHA1 hash
as returned by e.g. `git rev-parse'. No attempt is made to ensure
that the specified revision is actually a subset of the specified
branch.
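
   A Git checkout step therefore looks something like the following
sketch (the repository URL and branch name are placeholders):

     from buildbot.steps.source import Git
     f.addStep(Git(repourl="git://git.example.org/myproj.git",
                   branch="release-1.0"))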


File: buildbot.info,  Node: Attributes of Changes,  Prev: How Different VC Systems Specify Sources,  Up: Version Control Systems

3.1.4 Attributes of Changes
---------------------------

Who
===

Each Change has a `who' attribute, which specifies which developer is
responsible for the change. This is a string which comes from a
namespace controlled by the VC repository. Frequently this means it
is a username on the host which runs the repository, but not all VC
systems require this (Arch, for example, uses a fully-qualified `Arch
ID', which looks like an email address, as does Darcs).  Each
StatusNotifier will map the `who' attribute into something
appropriate for their particular means of communication: an email
address, an IRC handle, etc.

Files
=====

It also has a list of `files', which are just the tree-relative
filenames of any files that were added, deleted, or modified for this
Change. These filenames are used by the `fileIsImportant' function
(in the Scheduler) to decide whether it is worth triggering a new
build or not, e.g. the function could use the following function to
only run a build if a C file were checked in:

     def has_C_files(change):
         for name in change.files:
             if name.endswith(".c"):
                 return True
         return False
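
   To connect a function like `has_C_files' to a Scheduler, it is
passed as the `fileIsImportant' argument; a sketch (the scheduler and
builder names are placeholders) would be:

     from buildbot.scheduler import Scheduler
     c['schedulers'] = [Scheduler(name="c-code", branch=None,
                                  treeStableTimer=5*60,
                                  fileIsImportant=has_C_files,
                                  builderNames=["full"])]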

   Certain BuildSteps can also use the list of changed files to run a
more targeted series of tests, e.g. the `python_twisted.Trial' step
can run just the unit tests that provide coverage for the modified
.py files instead of running the full test suite.

Comments
========

The Change also has a `comments' attribute, which is a string
containing any checkin comments.

Revision
========

Each Change can have a `revision' attribute, which describes how to
get a tree with a specific state: a tree which includes this Change
(and all that came before it) but none that come after it. If this
information is unavailable, the `.revision' attribute will be `None'.
These revisions are provided by the ChangeSource, and consumed by the
`computeSourceRevision' method in the appropriate `step.Source' class.

`CVS'
     `revision' is an int, seconds since the epoch

`SVN'
     `revision' is an int, the changeset number (r%d)

`Darcs'
     `revision' is a large string, the output of `darcs changes
     --context'

`Mercurial'
     `revision' is a short string (a hash ID), the output of `hg
     identify'

`Arch/Bazaar'
     `revision' is the full revision ID (ending in -patch-%d)

`P4'
     `revision' is an int, the transaction number

`Git'
     `revision' is a short string (a SHA1 hash), the output of e.g.
     `git rev-parse'

Branches
========

The Change might also have a `branch' attribute. This indicates that
all of the Change's files are in the same named branch. The
Schedulers get to decide whether the branch should be built or not.

   For VC systems like CVS, Arch, Monotone, and Git, the `branch'
name is unrelated to the filename. (that is, the branch name and the
filename inhabit unrelated namespaces). For SVN, branches are
expressed as subdirectories of the repository, so the file's "svnurl"
is a combination of some base URL, the branch name, and the filename
within the branch. (In a sense, the branch name and the filename
inhabit the same namespace). Darcs branches are subdirectories of a
base URL just like SVN. Mercurial branches are the same as Darcs.

`CVS'
     branch='warner-newfeature', files=['src/foo.c']

`SVN'
     branch='branches/warner-newfeature', files=['src/foo.c']

`Darcs'
     branch='warner-newfeature', files=['src/foo.c']

`Mercurial'
     branch='warner-newfeature', files=['src/foo.c']

`Arch/Bazaar'
     branch='buildbot-usebranches-0', files=['buildbot/master.py']

`Git'
     branch='warner-newfeature', files=['src/foo.c']

Links
=====

Finally, the Change might have a `links' list, which is intended to
provide a list of URLs to a _viewcvs_-style web page that provides
more detail for this Change, perhaps including the full file diffs.


File: buildbot.info,  Node: Schedulers,  Next: BuildSet,  Prev: Version Control Systems,  Up: Concepts

3.2 Schedulers
==============

Each Buildmaster has a set of `Scheduler' objects, each of which gets
a copy of every incoming Change. The Schedulers are responsible for
deciding when Builds should be run. Some Buildbot installations might
have a single Scheduler, while others may have several, each for a
different purpose.

   For example, a "quick" scheduler might exist to give immediate
feedback to developers, hoping to catch obvious problems in the code
that can be detected quickly. These typically do not run the full test
suite, nor do they run on a wide variety of platforms. They also
usually do a VC update rather than performing a brand-new checkout
each time. You could have a "quick" scheduler which used a 30 second
timeout, and feeds a single "quick" Builder that uses a VC
`mode='update'' setting.

   A separate "full" scheduler would run more comprehensive tests a
little while later, to catch more subtle problems. This scheduler
would have a longer tree-stable-timer, maybe 30 minutes, and would
feed multiple Builders (with a `mode=' of `'copy'', `'clobber'', or
`'export'').

   The `tree-stable-timer' and `fileIsImportant' decisions are made
by the Scheduler. Dependencies are also implemented here.  Periodic
builds (those which are run every N seconds rather than after new
Changes arrive) are triggered by a special `Periodic' Scheduler
subclass. The default Scheduler class can also be told to watch for
specific branches, ignoring Changes on other branches. This may be
useful if you have a trunk and a few release branches which should be
tracked, but when you don't want to have the Buildbot pay attention
to several dozen private user branches.

   When the setup has multiple sources of Changes the `category' can
be used for `Scheduler' objects to filter out a subset of the
Changes.  Note that not all change sources can attach a category.

   Some Schedulers may trigger builds for other reasons, other than
recent Changes. For example, a Scheduler subclass could connect to a
remote buildmaster and watch for builds of a library to succeed before
triggering a local build that uses that library.

   Each Scheduler creates and submits `BuildSet' objects to the
`BuildMaster', which is then responsible for making sure the
individual `BuildRequests' are delivered to the target `Builders'.

   `Scheduler' instances are activated by placing them in the
`c['schedulers']' list in the buildmaster config file. Each Scheduler
has a unique name.
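
   A sketch of the "quick" and "full" arrangement described above
(the names, timer values, and builder lists are illustrative only)
might look like:

     from buildbot.scheduler import Scheduler
     quick = Scheduler(name="quick", branch=None,
                       treeStableTimer=30,
                       builderNames=["quick-linux"])
     full = Scheduler(name="full", branch=None,
                      treeStableTimer=30*60,
                      builderNames=["full-linux", "full-osx",
                                    "full-solaris"])
     c['schedulers'] = [quick, full]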


File: buildbot.info,  Node: BuildSet,  Next: BuildRequest,  Prev: Schedulers,  Up: Concepts

3.3 BuildSet
============

A `BuildSet' is the name given to a set of Builds that all
compile/test the same version of the tree on multiple Builders. In
general, all these component Builds will perform the same sequence of
Steps, using the same source code, but on different platforms or
against a different set of libraries.

   The `BuildSet' is tracked as a single unit, which fails if any of
the component Builds have failed, and therefore can succeed only if
_all_ of the component Builds have succeeded. There are two kinds of
status notification messages that can be emitted for a BuildSet: the
`firstFailure' type (which fires as soon as we know the BuildSet will
fail), and the `Finished' type (which fires once the BuildSet has
completely finished, regardless of whether the overall set passed or
failed).

   A `BuildSet' is created with a _source stamp_ tuple of (branch,
revision, changes, patch), some of which may be None, and a list of
Builders on which it is to be run. They are then given to the
BuildMaster, which is responsible for creating a separate
`BuildRequest' for each Builder.

   There are a couple of different likely values for the
`SourceStamp':

`(revision=None, changes=[CHANGES], patch=None)'
     This is a `SourceStamp' used when a series of Changes have
     triggered a build. The VC step will attempt to check out a tree
     that contains CHANGES (and any changes that occurred before
     CHANGES, but not any that occurred after them).

`(revision=None, changes=None, patch=None)'
     This builds the most recent code on the default branch. This is
     the sort of `SourceStamp' that would be used on a Build that was
     triggered by a user request, or a Periodic scheduler. It is also
     possible to configure the VC Source Step to always check out the
     latest sources rather than paying attention to the Changes in the
      SourceStamp, which will result in the same behavior as this.

`(branch=BRANCH, revision=None, changes=None, patch=None)'
     This builds the most recent code on the given BRANCH. Again,
     this is generally triggered by a user request or Periodic build.

`(revision=REV, changes=None, patch=(LEVEL, DIFF))'
     This checks out the tree at the given revision REV, then applies
     a patch (using `patch -pLEVEL <DIFF'). The *note try:: feature
     uses this kind of `SourceStamp'. If `patch' is None, the patching
     step is bypassed.


   The buildmaster is responsible for turning the `BuildSet' into a
set of `BuildRequest' objects and queueing them on the appropriate
Builders.
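
   In code terms a `SourceStamp' is a small value object; a sketch of
constructing the first two cases above (`my_changes' stands in for a
list of Change objects) would be:

     from buildbot.sourcestamp import SourceStamp

     # a series of Changes triggered this build
     ss1 = SourceStamp(branch=None, changes=my_changes)

     # build the most recent code on the default branch
     ss2 = SourceStamp(branch=None, revision=None, changes=None)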


File: buildbot.info,  Node: BuildRequest,  Next: Builder,  Prev: BuildSet,  Up: Concepts

3.4 BuildRequest
================

A `BuildRequest' is a request to build a specific set of sources on a
single specific `Builder'. Each `Builder' runs the `BuildRequest' as
soon as it can (i.e. when an associated buildslave becomes free).
`BuildRequest's are prioritized from oldest to newest, so when a
buildslave becomes free, the `Builder' with the oldest `BuildRequest'
is run.

   The `BuildRequest' contains the `SourceStamp' specification.  The
actual process of running the build (the series of Steps that will be
executed) is implemented by the `Build' object. In the future this
might be changed, to have the `Build' define _what_ gets built, and a
separate `BuildProcess' (provided by the Builder) to define _how_ it
gets built.

   `BuildRequest' is created with optional `Properties'.  One of
these, `owner', is collected by the resultant `Build' and added to
the set of _interested users_ to which status notifications will be
sent, depending on the configuration for each status object.

   The `BuildRequest' may be mergeable with other compatible
`BuildRequest's. Builds that are triggered by incoming Changes will
generally be mergeable. Builds that are triggered by user requests
are generally not, unless they are multiple requests to build the
_latest sources_ of the same branch.


File: buildbot.info,  Node: Builder,  Next: Users,  Prev: BuildRequest,  Up: Concepts

3.5 Builder
===========

The `Builder' is a long-lived object which controls all Builds of a
given type. Each one is created when the config file is first parsed,
and lives forever (or rather until it is removed from the config
file). It mediates the connections to the buildslaves that do all the
work, and is responsible for creating the `Build' objects that decide
_how_ a build is performed (i.e., which steps are executed in what
order).

   Each `Builder' gets a unique name, and the path name of a
directory where it gets to do all its work (there is a
buildmaster-side directory for keeping status information, as well as
a buildslave-side directory where the actual checkout/compile/test
commands are executed). It also gets a `BuildFactory', which is
responsible for creating new `Build' instances: because the `Build'
instance is what actually performs each build, choosing the
`BuildFactory' is the way to specify what happens each time a build
is done.

   Each `Builder' is associated with one or more `BuildSlaves'.  A
`Builder' which is used to perform OS-X builds (as opposed to Linux
or Solaris builds) should naturally be associated with an OS-X-based
buildslave.

   A `Builder' may be given a set of environment variables to be used
in its *note ShellCommand::s. These variables will override anything
in the buildslave's environment. Variables passed directly to a
ShellCommand will override variables of the same name passed to the
Builder.

   For example, if you have a pool of identical slaves, it is often
easier to manage variables like PATH from Buildbot rather than editing
them manually inside each slave's environment.

     f = factory.BuildFactory()
     f.addStep(ShellCommand(
                   command=['bash', './configure']))
     f.addStep(Compile())

     c['builders'] = [
       {'name': 'test',
        'slavenames': ['slave1', 'slave2', 'slave3',
                       'slave4', 'slave5', 'slave6'],
        'builddir': 'test',
        'factory': f,
        'env': {'PATH': '/opt/local/bin:/opt/app/bin:/usr/local/bin:/usr/bin'}},
     ]


File: buildbot.info,  Node: Users,  Next: Build Properties,  Prev: Builder,  Up: Concepts

3.6 Users
=========

Buildbot has a somewhat limited awareness of _users_. It assumes the
world consists of a set of developers, each of whom can be described
by a couple of simple attributes. These developers make changes to
the source code, causing builds which may succeed or fail.

   Each developer is primarily known through the source control
system. Each Change object that arrives is tagged with a `who' field
that typically gives the account name (on the repository machine) of
the user responsible for that change. This string is the primary key
by which the User is known, and is displayed on the HTML status pages
and in each Build's "blamelist".

   To do more with the User than just refer to them, this username
needs to be mapped into an address of some sort. The responsibility
for this mapping is left up to the status module which needs the
address. The core code knows nothing about email addresses or IRC
nicknames, just user names.

* Menu:

* Doing Things With Users::
* Email Addresses::
* IRC Nicknames::
* Live Status Clients::


File: buildbot.info,  Node: Doing Things With Users,  Next: Email Addresses,  Prev: Users,  Up: Users

3.6.1 Doing Things With Users
-----------------------------

Each Change has a single User who is responsible for that Change. Most
Builds have a set of Changes: the Build represents the first time
these Changes have been built and tested by the Buildbot. The build
has a "blamelist" that consists of a simple union of the Users
responsible for all the Build's Changes.

   The Build provides (through the IBuildStatus interface) a list of
Users who are "involved" in the build. For now this is equal to the
blamelist, but in the future it will be expanded to include a "build
sheriff" (a person who is "on duty" at that time and responsible for
watching over all builds that occur during their shift), as well as
per-module owners who simply want to keep watch over their domain
(chosen by subdirectory or a regexp matched against the filenames
pulled out of the Changes). The Involved Users are those who probably
have an interest in the results of any given build.

   In the future, Buildbot will acquire the concept of "Problems",
which last longer than builds and have beginnings and ends. For
example, a test case which passed in one build and then failed in the
next is a Problem. The Problem lasts until the test case starts
passing again, at which point the Problem is said to be "resolved".

   If there appears to be a code change that went into the tree at the
same time as the test started failing, that Change is marked as being
responsible for the Problem, and the user who made the change is added
to the Problem's "Guilty" list. In addition to this user, there may
be others who share responsibility for the Problem (module owners,
sponsoring developers). In addition to the Responsible Users, there
may be a set of Interested Users, who take an interest in the fate of
the Problem.

   Problems therefore have sets of Users who may want to be kept
aware of the condition of the problem as it changes over time. If
configured, the Buildbot can pester everyone on the Responsible list
with increasing harshness until the problem is resolved, with the
most harshness reserved for the Guilty parties themselves. The
Interested Users may merely be told when the problem starts and
stops, as they are not actually responsible for fixing anything.


File: buildbot.info,  Node: Email Addresses,  Next: IRC Nicknames,  Prev: Doing Things With Users,  Up: Users

3.6.2 Email Addresses
---------------------

The `buildbot.status.mail.MailNotifier' class (*note MailNotifier::)
provides a status target which can send email about the results of
each build. It accepts a static list of email addresses to which each
message should be delivered, but it can also be configured to send
mail to the Build's Interested Users. To do this, it needs a way to
convert User names into email addresses.

   For many VC systems, the User Name is actually an account name on
the system which hosts the repository. As such, turning the name into
an email address is a simple matter of appending
"@repositoryhost.com". Some projects use other kinds of mappings (for
example the preferred email address may be at "project.org" despite
the repository host being named "cvs.project.org"), and some VC
systems have full separation between the concept of a user and that
of an account on the repository host (like Perforce). Some systems
(like Arch) put a full contact email address in every change.

   To convert these names to addresses, the MailNotifier uses an
EmailLookup object. This provides a .getAddress method which accepts
a name and (eventually) returns an address. The default `MailNotifier'
module provides an EmailLookup which simply appends a static string,
configurable when the notifier is created. To create more complex
behaviors (perhaps using an LDAP lookup, or using "finger" on a
central host to determine a preferred address for the developer),
provide a different object as the `lookup' argument.
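
   As a minimal sketch of such a custom lookup (the class name, the
address table, and the fallback domain below are made up for
illustration; the only requirement described above is a `getAddress'
method), placed in `master.cfg':

     from twisted.internet import defer
     from buildbot.status import mail

     class DictLookup:
         """Map known user names to addresses, falling back to
         name@example.org for everyone else."""
         def __init__(self, addresses, default_domain="example.org"):
             self.addresses = addresses
             self.default_domain = default_domain

         def getAddress(self, name):
             # return a Deferred, since the address may arrive "eventually"
             addr = self.addresses.get(name,
                                       "%s@%s" % (name, self.default_domain))
             return defer.succeed(addr)

     m = mail.MailNotifier(fromaddr="buildbot@example.org",
                           lookup=DictLookup({'alice': 'alice@project.org'}))
     c['status'].append(m)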

   In the future, when the Problem mechanism has been set up, the
Buildbot will need to send mail to arbitrary Users. It will do this
by locating a MailNotifier-like object among all the buildmaster's
status targets, and asking it to send messages to various Users. This
means the User-to-address mapping only has to be set up once, in your
MailNotifier, and every email message the buildbot emits will take
advantage of it.


File: buildbot.info,  Node: IRC Nicknames,  Next: Live Status Clients,  Prev: Email Addresses,  Up: Users

3.6.3 IRC Nicknames
-------------------

Like MailNotifier, the `buildbot.status.words.IRC' class provides a
status target which can announce the results of each build. It also
provides an interactive interface by responding to online queries
posted in the channel or sent as private messages.
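
   A typical `master.cfg' entry looks something like this (the server
and channel names are placeholders):

     from buildbot.status import words
     irc = words.IRC(host="irc.example.org", nick="bbot",
                     channels=["#example-project"])
     c['status'].append(irc)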

   In the future, the buildbot can be configured to map User names to IRC
nicknames, to watch for the recent presence of these nicknames, and to
deliver build status messages to the interested parties. Like
`MailNotifier' does for email addresses, the `IRC' object will have
an `IRCLookup' which is responsible for nicknames. The mapping can be
set up statically, or it can be updated by online users themselves
(by claiming a username with some kind of "buildbot: i am user
warner" commands).

   Once the mapping is established, the rest of the buildbot can ask
the `IRC' object to send messages to various users. It can report on
the likelihood that the user saw the given message (based upon how
long the user has been inactive on the channel), which might prompt
the Problem Hassler logic to send them an email message instead.


File: buildbot.info,  Node: Live Status Clients,  Prev: IRC Nicknames,  Up: Users

3.6.4 Live Status Clients
-------------------------

The Buildbot also offers a PB-based status client interface which can
display real-time build status in a GUI panel on the developer's
desktop.  This interface is normally anonymous, but it could be
configured to let the buildmaster know _which_ developer is using the
status client. The status client could then be used as a
message-delivery service, providing an alternative way to deliver
low-latency high-interruption messages to the developer (like "hey,
you broke the build").


File: buildbot.info,  Node: Build Properties,  Prev: Users,  Up: Concepts

3.7 Build Properties
====================

Each build has a set of "Build Properties", which can be used by its
BuildSteps to modify their actions.  These properties, in the form of
key-value pairs, provide a general framework for dynamically altering
the behavior of a build based on its circumstances.

   Properties come from a number of places:
   * global configuration - These properties apply to all builds.

   * schedulers - A scheduler can specify properties available to all
     the builds it starts.

   * buildslaves - A buildslave can pass properties on to the builds
     it performs.

   * builds - A build automatically sets a number of properties on
     itself.

   * steps - Steps of a build can set properties that are available
     to subsequent steps.  In particular, source steps set a number
     of properties.

   Properties are very flexible, and can be used to implement all
manner of functionality.  Here are some examples:

   Most Source steps record the revision that they checked out in the
`got_revision' property.  A later step could use this property to
specify the name of a fully-built tarball, dropped in an
easily-accessible directory for later testing.
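
   For example, a later step might embed `got_revision' in the tarball
name using `WithProperties'.  This sketch assumes `WithProperties' is
importable from `buildbot.steps.shell', and the tarball and directory
names are placeholders:

     from buildbot.process import factory
     from buildbot.steps.shell import ShellCommand, WithProperties

     f = factory.BuildFactory()
     # ... source-checkout and compile steps would come first ...
     f.addStep(ShellCommand(
         command=["tar", "czf",
                  WithProperties("build-%s.tar.gz", "got_revision"),
                  "build"]))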

   Some projects want to perform nightly builds as well as in response
to committed changes.  Such a project would run two schedulers, both
pointing to the same set of builders, but could provide an
`is_nightly' property so that steps can distinguish the nightly
builds, perhaps to run more resource-intensive tests.
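
   Sketched out, that might look like the following (the scheduler and
builder names are placeholders; the `properties' argument is described
in *note Change Sources and Schedulers::):

     from buildbot import scheduler
     commit = scheduler.Scheduler(name="commit", branch=None,
                                  treeStableTimer=5*60,
                                  builderNames=["full-linux"])
     nightly = scheduler.Nightly(name="nightly",
                                 builderNames=["full-linux"],
                                 hour=3, minute=0,
                                 properties={'is_nightly': True})
     c['schedulers'] = [commit, nightly]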

   Some projects have different build processes on different systems.
Rather than create a build factory for each slave, the steps can use
buildslave properties to identify the unique aspects of each slave
and adapt the build process dynamically.


File: buildbot.info,  Node: Configuration,  Next: Getting Source Code Changes,  Prev: Concepts,  Up: Top

4 Configuration
***************

The buildbot's behavior is defined by the "config file", which
normally lives in the `master.cfg' file in the buildmaster's base
directory (but this can be changed with an option to the `buildbot
create-master' command). This file completely specifies which
Builders are to be run, which slaves they should use, how Changes
should be tracked, and where the status information is to be sent.
The buildmaster's `buildbot.tac' file names the base directory;
everything else comes from the config file.

   A sample config file was installed for you when you created the
buildmaster, but you will need to edit it before your buildbot will do
anything useful.

   This chapter gives an overview of the format of this file and the
various sections in it. You will need to read the later chapters to
understand how to fill in each section properly.

* Menu:

* Config File Format::
* Loading the Config File::
* Testing the Config File::
* Defining the Project::
* Change Sources and Schedulers::
* Merging BuildRequests::
* Setting the slaveport::
* Buildslave Specifiers::
* On-Demand ("Latent") Buildslaves::
* Defining Global Properties::
* Defining Builders::
* Defining Status Targets::
* Debug options::


File: buildbot.info,  Node: Config File Format,  Next: Loading the Config File,  Prev: Configuration,  Up: Configuration

4.1 Config File Format
======================

The config file is, fundamentally, just a piece of Python code which
defines a dictionary named `BuildmasterConfig', with a number of keys
that are treated specially. You don't need to know Python to do basic
configuration, though; you can just copy the syntax of the sample
file. If you _are_ comfortable writing Python code, however, you can
use all the power of a full programming language to achieve more
complicated configurations.

   The `BuildmasterConfig' name is the only one which matters: all
other names defined during the execution of the file are discarded.
When parsing the config file, the Buildmaster generally compares the
old configuration with the new one and performs the minimum set of
actions necessary to bring the buildbot up to date: Builders which are
not changed are left untouched, and Builders which are modified get to
keep their old event history.

   Basic Python syntax: comments start with a hash character ("#"),
tuples are defined with `(parenthesis, pairs)', arrays are defined
with `[square, brackets]', tuples and arrays are mostly
interchangeable. Dictionaries (data structures which map "keys" to
"values") are defined with curly braces: `{'key1': 'value1', 'key2':
'value2'} '. Function calls (and object instantiation) can use named
parameters, like `w = html.Waterfall(http_port=8010)'.

   The config file starts with a series of `import' statements, which
make various kinds of Steps and Status targets available for later
use. The main `BuildmasterConfig' dictionary is created, then it is
populated with a variety of keys. These keys are broken roughly into
the following sections, each of which is documented in the rest of
this chapter:

   * Project Definitions

   * Change Sources / Schedulers

   * Slaveport

   * Buildslave Configuration

   * Builders / Interlocks

   * Status Targets

   * Debug options
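
   As a rough sketch, a minimal `master.cfg' tying these sections
together might look like the following.  Every value here is a
placeholder; each key is covered in detail later in this chapter:

     # master.cfg -- skeleton only
     c = BuildmasterConfig = {}

     c['projectName'] = "Example"
     c['projectURL'] = "http://example.org/"
     c['buildbotURL'] = "http://localhost:8010/"

     from buildbot.changes.pb import PBChangeSource
     c['change_source'] = PBChangeSource()

     from buildbot import scheduler
     c['schedulers'] = [scheduler.Scheduler(name="all", branch=None,
                                            treeStableTimer=60,
                                            builderNames=["example"])]

     c['slavePortnum'] = 9989

     from buildbot.buildslave import BuildSlave
     c['slaves'] = [BuildSlave("bot1", "passwd")]

     from buildbot.process import factory
     from buildbot.steps.shell import Compile
     f = factory.BuildFactory()
     f.addStep(Compile())
     c['builders'] = [{'name': "example", 'slavename': "bot1",
                       'builddir': "example", 'factory': f}]

     from buildbot.status import html
     c['status'] = [html.Waterfall(http_port=8010)]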

   The config file can use a few names which are placed into its
namespace:

`basedir'
     the base directory for the buildmaster. This string has not been
     expanded, so it may start with a tilde. It needs to be expanded
     before use. The config file is located in
     `os.path.expanduser(os.path.join(basedir, 'master.cfg'))'



File: buildbot.info,  Node: Loading the Config File,  Next: Testing the Config File,  Prev: Config File Format,  Up: Configuration

4.2 Loading the Config File
===========================

The config file is only read at specific points in time. It is first
read when the buildmaster is launched. Once it is running, there are
various ways to ask it to reload the config file. If you are on the
system hosting the buildmaster, you can send a `SIGHUP' signal to it:
the `buildbot' tool has a shortcut for this:

     buildbot reconfig BASEDIR

   This command will show you all of the lines from `twistd.log' that
relate to the reconfiguration. If there are any problems during the
config-file reload, they will be displayed in these lines.

   The debug tool (`buildbot debugclient --master HOST:PORT') has a
"Reload .cfg" button which will also trigger a reload. In the future,
there will be other ways to accomplish this step (probably a
password-protected button on the web page, as well as a privileged IRC
command).

   When reloading the config file, the buildmaster will endeavor to
change as little as possible about the running system. For example,
although old status targets may be shut down and new ones started up,
any status targets that were not changed since the last time the
config file was read will be left running and untouched. Likewise any
Builders which have not been changed will be left running. If a
Builder is modified (say, the build process is changed) while a Build
is currently running, that Build will keep running with the old
process until it completes. Any previously queued Builds (or Builds
which get queued after the reconfig) will use the new process.


File: buildbot.info,  Node: Testing the Config File,  Next: Defining the Project,  Prev: Loading the Config File,  Up: Configuration

4.3 Testing the Config File
===========================

To verify that the config file is well-formed and contains no
deprecated or invalid elements, use the "checkconfig" command:

     % buildbot checkconfig master.cfg
     Config file is good!

   If the config file has deprecated features (perhaps because you've
upgraded the buildmaster and need to update the config file to match),
they will be announced by checkconfig. In this case, the config file
will work, but you should really remove the deprecated items and use
the recommended replacements instead:

     % buildbot checkconfig master.cfg
     /usr/lib/python2.4/site-packages/buildbot/master.py:559: DeprecationWarning: c['sources'] is
     deprecated as of 0.7.6 and will be removed by 0.8.0 . Please use c['change_source'] instead.
       warnings.warn(m, DeprecationWarning)
     Config file is good!

   If the config file is simply broken, that will be caught too:

     % buildbot checkconfig master.cfg
     Traceback (most recent call last):
       File "/usr/lib/python2.4/site-packages/buildbot/scripts/runner.py", line 834, in doCheckConfig
         ConfigLoader(configFile)
       File "/usr/lib/python2.4/site-packages/buildbot/scripts/checkconfig.py", line 31, in __init__
         self.loadConfig(configFile)
       File "/usr/lib/python2.4/site-packages/buildbot/master.py", line 480, in loadConfig
         exec f in localDict
       File "/home/warner/BuildBot/master/foolscap/master.cfg", line 90, in ?
         c[bogus] = "stuff"
     NameError: name 'bogus' is not defined


File: buildbot.info,  Node: Defining the Project,  Next: Change Sources and Schedulers,  Prev: Testing the Config File,  Up: Configuration

4.4 Defining the Project
========================

There are a couple of basic settings that you use to tell the buildbot
what project it is working on. This information is used by status
reporters to let users find out more about the codebase being
exercised by this particular Buildbot installation.

     c['projectName'] = "Buildbot"
     c['projectURL'] = "http://buildbot.sourceforge.net/"
     c['buildbotURL'] = "http://localhost:8010/"

   `projectName' is a short string that will be used to describe the
project that this buildbot is working on. For example, it is used as
the title of the waterfall HTML page.

   `projectURL' is a string that gives a URL for the project as a
whole. HTML status displays will show `projectName' as a link to
`projectURL', to provide a link from buildbot HTML pages to your
project's home page.

   The `buildbotURL' string should point to the location where the
buildbot's internal web server (usually the `html.Waterfall' page) is
visible. This typically uses the port number set when you create the
`Waterfall' object: the buildbot needs your help to figure out a
suitable externally-visible host name.

   When status notices are sent to users (either by email or over
IRC), `buildbotURL' will be used to create a URL to the specific build
or problem that they are being notified about. It will also be made
available to queriers (over IRC) who want to find out where to get
more information about this buildbot.

   The `logCompressionLimit' enables bz2-compression of build logs on
disk for logs that are bigger than the given size, or disables that
completely if given `False'. The default value is 4k, which should be
a reasonable default on most file systems. This setting has no impact
on status plugins, and merely affects the required disk space on the
master for build logs.
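
   For example:

     c['logCompressionLimit'] = 16384   # only compress logs bigger than 16kB
     # c['logCompressionLimit'] = False # or: never compress logs on disk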


File: buildbot.info,  Node: Change Sources and Schedulers,  Next: Merging BuildRequests,  Prev: Defining the Project,  Up: Configuration

4.5 Change Sources and Schedulers
=================================

The `c['change_source']' key is the ChangeSource instance(1) that
defines how the buildmaster learns about source code changes. More
information about what goes here is available in *Note Getting Source
Code Changes::.

     from buildbot.changes.pb import PBChangeSource
     c['change_source'] = PBChangeSource()
   
   (note: in buildbot-0.7.5 and earlier, this key was named
`c['sources']', and required a list. `c['sources']' is deprecated as
of buildbot-0.7.6 and is scheduled to be removed in a future release).

   `c['schedulers']' is a list of Scheduler instances, each of which
causes builds to be started on a particular set of Builders. The two
basic Scheduler classes you are likely to start with are `Scheduler'
and `Periodic', but you can write a customized subclass to implement
more complicated build scheduling.

   Scheduler arguments should always be specified by name (as keyword
arguments), to allow for future expansion:

     sched = Scheduler(name="quick", builderNames=['lin', 'win'])

   All schedulers have several arguments in common:

`name'
     Each Scheduler must have a unique name. This is used in status
     displays, and is also available in the build property
     `scheduler'.

`builderNames'
     This is the set of builders which this scheduler should trigger,
     specified as a list of names (strings).

`properties'
     This is a dictionary specifying properties that will be
     transmitted to all builds started by this scheduler.


   Here is a brief catalog of the available Scheduler types. All these
Schedulers are classes in `buildbot.scheduler', and the docstrings
there are the best source of documentation on the arguments taken by
each one.

* Menu:

* Scheduler Scheduler::
* AnyBranchScheduler::
* Dependent Scheduler::
* Periodic Scheduler::
* Nightly Scheduler::
* Try Schedulers::
* Triggerable Scheduler::

   ---------- Footnotes ----------

   (1) To be precise, it is an object or a list of objects which all
implement the `buildbot.interfaces.IChangeSource' Interface. It is
unusual to have multiple ChangeSources, so this key accepts either a
single ChangeSource or a sequence of them.


File: buildbot.info,  Node: Scheduler Scheduler,  Next: AnyBranchScheduler,  Prev: Change Sources and Schedulers,  Up: Change Sources and Schedulers

4.5.1 Scheduler Scheduler
-------------------------

This is the original and still most popular Scheduler class. It
follows exactly one branch, and starts a configurable
tree-stable-timer after each change on that branch. When the timer
expires, it starts a build on some set of Builders. The Scheduler
accepts a `fileIsImportant' function which can be used to ignore some
Changes if they do not affect any "important" files.

   The arguments to this scheduler are:

`name'

`builderNames'

`properties'

`branch'
     This Scheduler will pay attention to a single branch, ignoring
     Changes that occur on other branches. Setting `branch' equal to
     the special value of `None' means it should only pay attention to
     the default branch. Note that `None' is a keyword, not a string,
     so you want to use `None' and not `"None"'.

`treeStableTimer'
     The Scheduler will wait for this many seconds before starting the
     build. If new changes are made during this interval, the timer
     will be restarted, so really the build will be started after a
     change and then after this many seconds of inactivity.

`fileIsImportant'
     A callable which takes one argument, a Change instance, and
     returns `True' if the change is worth building, and `False' if
     it is not.  Unimportant Changes are accumulated until the build
     is triggered by an important change.  The default value of None
     means that all Changes are important.

`categories'
     A list of categories of changes that this scheduler will respond
     to.  If this is specified, then any non-matching changes are
     ignored.


   Example:

     from buildbot import scheduler
     quick = scheduler.Scheduler(name="quick",
                         branch=None,
                         treeStableTimer=60,
                         builderNames=["quick-linux", "quick-netbsd"])
     full = scheduler.Scheduler(name="full",
                         branch=None,
                         treeStableTimer=5*60,
                         builderNames=["full-linux", "full-netbsd", "full-OSX"])
     c['schedulers'] = [quick, full]

   In this example, the two "quick" builders are triggered 60 seconds
after the tree has been changed. The "full" builds do not run quite
so quickly (they wait 5 minutes), so hopefully if the quick builds
fail due to a missing file or really simple typo, the developer can
discover and fix the problem before the full builds are started. Both
Schedulers only pay attention to the default branch: any changes on
other branches are ignored by these Schedulers. Each Scheduler
triggers a different set of Builders, referenced by name.


File: buildbot.info,  Node: AnyBranchScheduler,  Next: Dependent Scheduler,  Prev: Scheduler Scheduler,  Up: Change Sources and Schedulers

4.5.2 AnyBranchScheduler
------------------------

This scheduler uses a tree-stable-timer like the default one, but
follows multiple branches at once. Each branch gets a separate timer.

   The arguments to this scheduler are:

`name'

`builderNames'

`properties'

`branches'
     This Scheduler will pay attention to any number of branches,
     ignoring Changes that occur on other branches. Branches are
     specified just as for the `Scheduler' class.

`treeStableTimer'
     The Scheduler will wait for this many seconds before starting the
     build. If new changes are made during this interval, the timer
     will be restarted, so really the build will be started after a
     change and then after this many seconds of inactivity.

`fileIsImportant'
     A callable which takes one argument, a Change instance, and
     returns `True' if the change is worth building, and `False' if
     it is not.  Unimportant Changes are accumulated until the build
     is triggered by an important change.  The default value of None
     means that all Changes are important.
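
   A sketch of a typical configuration (the branch and builder names
are placeholders):

     from buildbot import scheduler
     s = scheduler.AnyBranchScheduler(
             name="all-branches",
             branches=["trunk", "branches/1.0"],
             treeStableTimer=5*60,
             builderNames=["full-linux", "full-netbsd"])
     c['schedulers'] = [s]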


File: buildbot.info,  Node: Dependent Scheduler,  Next: Periodic Scheduler,  Prev: AnyBranchScheduler,  Up: Change Sources and Schedulers

4.5.3 Dependent Scheduler
-------------------------

It is common to wind up with one kind of build which should only be
performed if the same source code was successfully handled by some
other kind of build first. An example might be a packaging step: you
might only want to produce .deb or RPM packages from a tree that was
known to compile successfully and pass all unit tests. You could put
the packaging step in the same Build as the compile and testing steps,
but there might be other reasons to not do this (in particular you
might have several Builders worth of compiles/tests, but only wish to
do the packaging once). Another example is if you want to skip the
"full" builds after a failing "quick" build of the same source code.
Or, if one Build creates a product (like a compiled library) that is
used by some other Builder, you'd want to make sure the consuming
Build is run _after_ the producing one.

   You can use "Dependencies" to express this relationship to the
Buildbot. There is a special kind of Scheduler named
`scheduler.Dependent' that will watch an "upstream" Scheduler for
builds to complete successfully (on all of its Builders). Each time
that happens, the same source code (i.e. the same `SourceStamp') will
be used to start a new set of builds, on a different set of Builders.
This "downstream" scheduler doesn't pay attention to Changes at all.
It only pays attention to the upstream scheduler.

   If the build fails on any of the Builders in the upstream set, the
downstream builds will not fire.  Note that, for SourceStamps
generated by a ChangeSource, the `revision' is None, meaning HEAD.
If any changes are committed between the time the upstream scheduler
begins its build and the time the dependent scheduler begins its
build, then those changes will be included in the downstream build.
See the *note Triggerable Scheduler:: for a more flexible dependency
mechanism that can avoid this problem.

   The arguments to this scheduler are:

`name'

`builderNames'

`properties'

`upstream'
     The upstream scheduler to watch.  Note that this is an
     "instance", not the name of the scheduler.

   Example:

     from buildbot import scheduler
     tests = scheduler.Scheduler("just-tests", None, 5*60,
                                 ["full-linux", "full-netbsd", "full-OSX"])
     package = scheduler.Dependent("build-package",
                                   tests, # upstream scheduler -- no quotes!
                                   ["make-tarball", "make-deb", "make-rpm"])
     c['schedulers'] = [tests, package]


File: buildbot.info,  Node: Periodic Scheduler,  Next: Nightly Scheduler,  Prev: Dependent Scheduler,  Up: Change Sources and Schedulers

4.5.4 Periodic Scheduler
------------------------

This simple scheduler just triggers a build every N seconds.

   The arguments to this scheduler are:

`name'

`builderNames'

`properties'

`periodicBuildTimer'
     The time, in seconds, after which to start a build.

   Example:

     from buildbot import scheduler
     nightly = scheduler.Periodic(name="nightly",
                     builderNames=["full-solaris"],
                     periodicBuildTimer=24*60*60)
     c['schedulers'] = [nightly]

   The Scheduler in this example just runs the full solaris build once
per day. Note that this Scheduler only lets you control the time
between builds, not the absolute time-of-day of each Build, so this
could easily wind up a "daily" or "every afternoon" scheduler
depending upon when it was first activated.


File: buildbot.info,  Node: Nightly Scheduler,  Next: Try Schedulers,  Prev: Periodic Scheduler,  Up: Change Sources and Schedulers

4.5.5 Nightly Scheduler
-----------------------

This is a highly configurable periodic build scheduler, which triggers
a build at particular times of day, week, month, or year. The
configuration syntax is very similar to the well-known `crontab'
format, in which you provide values for minute, hour, day, and month
(some of which can be wildcards), and a build is triggered whenever
the current time matches the given constraints. This can run a build
every night, every morning, every weekend, alternate Thursdays, on
your boss's birthday, etc.

   Pass some subset of `minute', `hour', `dayOfMonth', `month', and
`dayOfWeek'; each may be a single number or a list of valid values.
The builds will be triggered whenever the current time matches these
values. Wildcards are represented by a '*' string. All fields default
to a wildcard except 'minute', so with no fields this defaults to a
build every hour, on the hour.  The full list of parameters is:

`name'

`builderNames'

`properties'

`branch'
     The branch to build, just as for `Scheduler'.

`minute'
     The minute of the hour on which to start the build.  This
     defaults to 0, meaning an hourly build.

`hour'
     The hour of the day on which to start the build, in 24-hour
     notation.  This defaults to *, meaning every hour.

`month'
     The month in which to start the build, with January = 1.  This
     defaults to *, meaning every month.

`dayOfWeek'
     The day of the week to start a build, with Monday = 0.  This
     defaults to *, meaning every day of the week.

`onlyIfChanged'
     If this is true, then builds will not be scheduled at the
     designated time unless the source has changed since the previous
     build.

   For example, the following master.cfg clause will cause a build to
be started every night at 3:00am:

     s = scheduler.Nightly(name='nightly',
             builderNames=['builder1', 'builder2'],
             hour=3,
             minute=0)

   This scheduler will perform a build each Monday morning at 6:23am
and again at 8:23am, but only if someone has committed code in the
interim:

     s = scheduler.Nightly(name='BeforeWork',
              builderNames=['builder1'],
              dayOfWeek=0,
              hour=[6,8],
              minute=23,
              onlyIfChanged=True)

   The following runs a build every two hours, using Python's `range'
function:

     s = Nightly(name='every2hours',
             builderNames=['builder1'],
             hour=range(0, 24, 2))

   Finally, this example will run only on December 24th:

     s = Nightly(name='SleighPreflightCheck',
             builderNames=['flying_circuits', 'radar'],
             month=12,
             dayOfMonth=24,
             hour=12,
             minute=0)


File: buildbot.info,  Node: Try Schedulers,  Next: Triggerable Scheduler,  Prev: Nightly Scheduler,  Up: Change Sources and Schedulers

4.5.6 Try Schedulers
--------------------

This scheduler allows developers to use the `buildbot try' command to
trigger builds of code they have not yet committed. See *note try::
for complete details.

   Two implementations are available: `Try_Jobdir' and
`Try_Userpass'.  The former monitors a job directory, specified by
the `jobdir' parameter, while the latter listens for PB connections
on a specific `port', and authenticates users against the `userpass'
list.
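
   As a brief sketch (see *note try:: for the authoritative details;
the scheduler names, port, and passwords here are placeholders):

     from buildbot.scheduler import Try_Jobdir, Try_Userpass

     s1 = Try_Jobdir(name="try-jobdir",
                     builderNames=["full-linux"],
                     jobdir="jobdir")
     s2 = Try_Userpass(name="try-pb",
                       builderNames=["full-linux"],
                       port=8031,
                       userpass=[("alice", "pw1"), ("bob", "pw2")])
     c['schedulers'] = [s1, s2]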


File: buildbot.info,  Node: Triggerable Scheduler,  Prev: Try Schedulers,  Up: Change Sources and Schedulers

4.5.7 Triggerable Scheduler
---------------------------

The `Triggerable' scheduler waits to be triggered by a Trigger step
(see *note Triggering Schedulers::) in another build. That step can
optionally wait for the scheduler's builds to complete. This provides
two advantages over Dependent schedulers. First, the same scheduler
can be triggered from multiple builds. Second, the ability to wait
for a Triggerable's builds to complete provides a form of "subroutine
call", where one or more builds can "call" a scheduler to perform
some work for them, perhaps on other buildslaves.

   The parameters are just the basics:

`name'

`builderNames'

`properties'

   This class is only useful in conjunction with the `Trigger' step.
Here is a fully-worked example:

     from buildbot import scheduler
     from buildbot.process import factory
     from buildbot.steps import trigger

     checkin = scheduler.Scheduler(name="checkin",
                 branch=None,
                 treeStableTimer=5*60,
                 builderNames=["checkin"])
     nightly = scheduler.Nightly(name='nightly',
                 builderNames=['nightly'],
                 hour=3,
                 minute=0)

     mktarball = scheduler.Triggerable(name="mktarball",
                     builderNames=["mktarball"])
     build = scheduler.Triggerable(name="build-all-platforms",
                     builderNames=["build-all-platforms"])
     test = scheduler.Triggerable(name="distributed-test",
                     builderNames=["distributed-test"])
     package = scheduler.Triggerable(name="package-all-platforms",
                     builderNames=["package-all-platforms"])

     c['schedulers'] = [checkin, nightly, build, test, package]

     # on checkin, make a tarball, build it, and test it
     checkin_factory = factory.BuildFactory()
     checkin_factory.addStep(trigger.Trigger(schedulerNames=['mktarball'],
                                            waitForFinish=True))
     checkin_factory.addStep(trigger.Trigger(schedulerNames=['build-all-platforms'],
                                        waitForFinish=True))
     checkin_factory.addStep(trigger.Trigger(schedulerNames=['distributed-test'],
                                       waitForFinish=True))

     # and every night, make a tarball, build it, and package it
     nightly_factory = factory.BuildFactory()
     nightly_factory.addStep(trigger.Trigger(schedulerNames=['mktarball'],
                                            waitForFinish=True))
     nightly_factory.addStep(trigger.Trigger(schedulerNames=['build-all-platforms'],
                                        waitForFinish=True))
     nightly_factory.addStep(trigger.Trigger(schedulerNames=['package-all-platforms'],
                                          waitForFinish=True))


File: buildbot.info,  Node: Merging BuildRequests,  Next: Setting the slaveport,  Prev: Change Sources and Schedulers,  Up: Configuration

4.6 Merging BuildRequests
=========================

By default, buildbot merges BuildRequests that have compatible
SourceStamps. This behavior can be customized with the
`c['mergeRequests']' configuration key.  This key specifies a function
which is called with three arguments: a `Builder' and two
`BuildRequest' objects.  It should return true if the requests can be
merged.  For example:

     def mergeRequests(builder, req1, req2):
         """Don't merge buildrequest at all"""
         return False
     c['mergeRequests'] = mergeRequests

   In many cases, the details of the SourceStamps and BuildRequests
are important.  In this example, only BuildRequests with the same
"reason" are merged; thus developers forcing builds for different
reasons will see distinct builds.

     def mergeRequests(builder, req1, req2):
         if req1.source.canBeMergedWith(req2.source) and  req1.reason == req2.reason:
            return True
         return False
     c['mergeRequests'] = mergeRequests


File: buildbot.info,  Node: Setting the slaveport,  Next: Buildslave Specifiers,  Prev: Merging BuildRequests,  Up: Configuration

4.7 Setting the slaveport
=========================

The buildmaster will listen on a TCP port of your choosing for
connections from buildslaves. It can also use this port for
connections from remote Change Sources, status clients, and debug
tools. This port should be visible to the outside world, and you'll
need to tell your buildslave admins about your choice.

   It does not matter which port you pick, as long as it is externally
visible; however, you should probably use something larger than 1024,
since most operating systems don't allow non-root processes to bind to
low-numbered ports. If your buildmaster is behind a firewall or a NAT
box of some sort, you may have to configure your firewall to permit
inbound connections to this port.

     c['slavePortnum'] = 10000

   `c['slavePortnum']' is a _strports_ specification string, defined
in the `twisted.application.strports' module (try `pydoc
twisted.application.strports' to get documentation on the format).
This means that you can have the buildmaster listen on a
localhost-only port by doing:

     c['slavePortnum'] = "tcp:10000:interface=127.0.0.1"

   This might be useful if you only run buildslaves on the same
machine, and they are all configured to contact the buildmaster at
`localhost:10000'.


File: buildbot.info,  Node: Buildslave Specifiers,  Next: On-Demand ("Latent") Buildslaves,  Prev: Setting the slaveport,  Up: Configuration

4.8 Buildslave Specifiers
=========================

The `c['slaves']' key is a list of known buildslaves. In the common
case, each buildslave is defined by an instance of the BuildSlave
class.  It represents a standard, manually started machine that will
try to connect to the buildbot master as a slave.  Contrast these
with the "on-demand" latent buildslaves, such as the Amazon Web
Service Elastic Compute Cloud latent buildslave discussed below.

   The BuildSlave class is instantiated with two values: (slavename,
slavepassword). These are the same two values that need to be
provided to the buildslave administrator when they create the
buildslave.

   The slavenames must be unique, of course. The password exists to
prevent evildoers from interfering with the buildbot by inserting
their own (broken) buildslaves into the system and thus displacing the
real ones.

   Buildslaves with an unrecognized slavename or a non-matching
password will be rejected when they attempt to connect, and a message
describing the problem will be put in the log file (see *note
Logfiles::).

     from buildbot.buildslave import BuildSlave
     c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd'),
                    BuildSlave('bot-bsd', 'bsdpasswd')
                   ]

   `BuildSlave' objects can also be created with an optional
`properties' argument, a dictionary specifying properties that will
be available to any builds performed on this slave.  For example:

     from buildbot.buildslave import BuildSlave
     c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                         properties={'os':'solaris'}),
                   ]

   The `BuildSlave' constructor can also take an optional
`max_builds' parameter to limit the number of builds that it will
execute simultaneously:

     from buildbot.buildslave import BuildSlave
     c['slaves'] = [BuildSlave("bot-linux", "linuxpassword", max_builds=2)]

   Historical note: in buildbot-0.7.5 and earlier, the `c['bots']'
key was used instead, and it took a list of (name, password) tuples.
This key is accepted for backwards compatibility, but is deprecated as
of 0.7.6 and will go away in some future release.

* Menu:

* When Buildslaves Go Missing::


File: buildbot.info,  Node: When Buildslaves Go Missing,  Up: Buildslave Specifiers

4.8.1 When Buildslaves Go Missing
---------------------------------

Sometimes, the buildslaves go away. One very common reason for this is
when the buildslave process is started once (manually) and left
running, but then later the machine reboots and the process is not
automatically restarted.

   If you'd like to have the administrator of the buildslave (or other
people) be notified by email when the buildslave has been missing for
too long, just add the `notify_on_missing=' argument to the
`BuildSlave' definition:

     c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                               notify_on_missing="bob@example.com"),
                   ]

   By default, this will send email when the buildslave has been
disconnected for more than one hour. Only one email per
connection-loss event will be sent. To change the timeout, use
`missing_timeout=' and give it a number of seconds (the default is
3600).

   You can have the buildmaster send email to multiple recipients:
just provide a list of addresses instead of a single one:

     c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                               notify_on_missing=["bob@example.com",
                                                  "alice@example.org"],
                               missing_timeout=300, # notify after 5 minutes
                               ),
                   ]

   The email sent this way will use a MailNotifier (*note
MailNotifier::) status target, if one is configured. This provides a
way for you to control the "from" address of the email, as well as
the relayhost (aka "smarthost") to use as an SMTP server. If no
MailNotifier is configured on this buildmaster, the
buildslave-missing emails will be sent using a default configuration.

   Note that if you want to have a MailNotifier for buildslave-missing
emails but not for regular build emails, just create one with
builders=[], as follows:

     from buildbot.status import mail
     m = mail.MailNotifier(fromaddr="buildbot@localhost", builders=[],
                           relayhost="smtp.example.org")
     c['status'].append(m)
     c['slaves'] = [BuildSlave('bot-solaris', 'solarispasswd',
                               notify_on_missing="bob@example.com"),
                   ]


File: buildbot.info,  Node: On-Demand ("Latent") Buildslaves,  Next: Defining Global Properties,  Prev: Buildslave Specifiers,  Up: Configuration

4.9 On-Demand ("Latent") Buildslaves
====================================

The standard buildbot model has slaves started manually.  The
previous section described how to configure the master for this
approach.

   Another approach is to let the buildbot master start slaves when
builds are ready, on-demand.  Thanks to services such as Amazon Web
Services' Elastic Compute Cloud ("AWS EC2"), this is relatively easy
to set up, and can be very useful for some situations.

   The buildslaves that are started on-demand are called "latent"
buildslaves.  As of this writing, buildbot ships with an abstract
base class for building latent buildslaves, and a concrete
implementation for AWS EC2.

* Menu:

* Amazon Web Services Elastic Compute Cloud ("AWS EC2")::
* Dangers with Latent Buildslaves::
* Writing New Latent Buildslaves::


File: buildbot.info,  Node: Amazon Web Services Elastic Compute Cloud ("AWS EC2"),  Next: Dangers with Latent Buildslaves,  Up: On-Demand ("Latent") Buildslaves

4.9.1 Amazon Web Services Elastic Compute Cloud ("AWS EC2")
-----------------------------------------------------------

AWS EC2 is a web service that allows you to start virtual machines in
an Amazon data center. Please see their website for details, including
costs. Using the AWS EC2 latent buildslaves involves getting an EC2
account with AWS and setting up payment; customizing one or more EC2
machine images ("AMIs") on your desired operating system(s) and
publishing them (privately if needed); and configuring the buildbot
master to know how to start your customized images for
"substantiating" your latent slaves.

* Menu:

* Get an AWS EC2 Account::
* Create an AMI::
* Configure the Master with an EC2LatentBuildSlave::


File: buildbot.info,  Node: Get an AWS EC2 Account,  Next: Create an AMI,  Up: Amazon Web Services Elastic Compute Cloud ("AWS EC2")

4.9.1.1 Get an AWS EC2 Account
..............................

To start off, to use the AWS EC2 latent buildslave, you need to get
an AWS developer account and sign up for EC2. These instructions may
help you get started:

   * Go to http://aws.amazon.com/ and click to "Sign Up Now" for an
     AWS account.

   * Once you are logged into your account, you need to sign up for
     EC2.  Instructions for how to do this have changed over time
     because Amazon changes their website, so the best advice is to
     hunt for it. After signing up for EC2, it may say it wants you
     to upload an x.509 cert. You will need this to create images
     (see below) but it is not technically necessary for the buildbot
     master configuration.

   * You must enter a valid credit card before you will be able to
     use EC2. Do that under 'Payment Method'.

   * Make sure you're signed up for EC2 by going to 'Your
     Account'->'Account Activity' and verifying EC2 is listed.


File: buildbot.info,  Node: Create an AMI,  Next: Configure the Master with an EC2LatentBuildSlave,  Prev: Get an AWS EC2 Account,  Up: Amazon Web Services Elastic Compute Cloud ("AWS EC2")

4.9.1.2 Create an AMI
.....................

Now you need to create an AMI and configure the master.  You may need
to run through this cycle a few times to get it working, but these
instructions should get you started.

   Creating an AMI is out of the scope of this document.  The EC2
Getting Started Guide is a good resource for this task.  Here are a
few additional hints.

   * When an instance of the image starts, it needs to automatically
     start a buildbot slave that connects to your master (to create a
     buildbot slave, *note Creating a buildslave::; to make a daemon,
     *note Launching the daemons::).

   * You may want to make an instance of the buildbot slave,
     configure it as a standard buildslave in the master (i.e., not
     as a latent slave), and test and debug it that way before you
     turn it into an AMI and convert to a latent slave in the master.


File: buildbot.info,  Node: Configure the Master with an EC2LatentBuildSlave,  Prev: Create an AMI,  Up: Amazon Web Services Elastic Compute Cloud ("AWS EC2")

4.9.1.3 Configure the Master with an EC2LatentBuildSlave
........................................................

Now let's assume you have an AMI that should work with the
EC2LatentBuildSlave.  It's now time to set up your buildbot master
configuration.

   You will need some information from your AWS account: the "Access
Key Id" and the "Secret Access Key".  If you've built the AMI
yourself, you probably already are familiar with these values.  If
you have not, and someone has given you access to an AMI, these hints
may help you find the necessary values:

   * While logged into your AWS account, find the "Access
     Identifiers" link (either on the left, or via "Your Account" ->
     "Access Identifiers".

   * On the page, you'll see alphanumeric values for "Your Access Key
     Id:" and "Your Secret Access Key:". Make a note of these. Later
     on, we'll call the first one your "identifier" and the second
     one your "secret_identifier."

   When creating an EC2LatentBuildSlave in the buildbot master
configuration, the first three arguments are required.  The name and
password are the first two arguments, and work the same as with
normal buildslaves.  The next argument specifies the type of the EC2
virtual machine (available options as of this writing include
"m1.small", "m1.large", 'm1.xlarge", "c1.medium", and "c1.xlarge";
see the EC2 documentation for descriptions of these machines).

   Here is the simplest example of configuring an EC2 latent
buildslave. It specifies all necessary remaining values explicitly in
the instantiation.

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                        ami='ami-12345',
                                        identifier='publickey',
                                        secret_identifier='privatekey'
                                        )]

   The "ami" argument specifies the AMI that the master should start.
The "identifier" argument specifies the AWS "Access Key Id," and the
"secret_identifier" specifies the AWS "Secret Access Key." Both the
AMI and the account information can be specified in alternate ways.

   Note that whoever has your identifier and secret_identifier values
can request AWS work charged to your account, so these values need to
be carefully protected. Another way to specify these access keys is
to put them in a separate file. You can then make the access
privileges stricter for this separate file, and potentially let more
people read your main configuration file.

   By default, you can make an .ec2 directory in the home folder of
the user running the buildbot master. In that directory, create a
file called aws_id.  The first line of that file should be your
access key id; the second line should be your secret access key id.
Then you can instantiate the build slave as follows.

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                        ami='ami-12345')]

   If you want to put the key information in another file, use the
"aws_id_file_path" initialization argument.

   Previous examples used a particular AMI.  If the Buildbot master
will be deployed in a process-controlled environment, it may be
convenient to specify the AMI more flexibly.  Rather than specifying
an individual AMI, specify one or two AMI filters.

   In all cases, the AMI that sorts last by its location (the S3
bucket and manifest name) will be preferred.

   One available filter is to specify the acceptable AMI owners, by
AWS account number (the 12 digit number, usually rendered in AWS with
hyphens like "1234-5678-9012", should be entered as an integer).

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     bot1 = EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                valid_ami_owners=[11111111111,
                                                  22222222222],
                                identifier='publickey',
                                secret_identifier='privatekey'
                                )

   The other available filter is to provide a regular expression
string that will be matched against each AMI's location (the S3
bucket and manifest name).

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     bot1 = EC2LatentBuildSlave(
         'bot1', 'sekrit', 'm1.large',
         valid_ami_location_regex=r'buildbot\-.*/image.manifest.xml',
         identifier='publickey', secret_identifier='privatekey')

   The regular expression can specify a group, which will be
preferred for the sorting.  Only the first group is used; subsequent
groups are ignored.

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     bot1 = EC2LatentBuildSlave(
         'bot1', 'sekrit', 'm1.large',
         valid_ami_location_regex=r'buildbot\-.*\-(.*)/image.manifest.xml',
         identifier='publickey', secret_identifier='privatekey')

   If the group can be cast to an integer, it will be.  This allows
10 to sort after 1, for instance.

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     bot1 = EC2LatentBuildSlave(
         'bot1', 'sekrit', 'm1.large',
         valid_ami_location_regex=r'buildbot\-.*\-(\d+)/image.manifest.xml',
         identifier='publickey', secret_identifier='privatekey')

   In addition to using the password as a handshake between the
master and the slave, you may want to use a firewall to assert that
only machines from a specific IP can connect as slaves.  This is
possible with AWS EC2 by using the Elastic IP feature.  To configure,
generate an Elastic IP in AWS, and then specify it in your
configuration using the "elastic_ip" argument.

     from buildbot.ec2buildslave import EC2LatentBuildSlave
     c['slaves'] = [EC2LatentBuildSlave('bot1', 'sekrit', 'm1.large',
                                        'ami-12345',
                                        identifier='publickey',
                                        secret_identifier='privatekey',
                                        elastic_ip='208.77.188.166'
                                        )]

   The EC2LatentBuildSlave supports all other configuration from the
standard BuildSlave.  The "missing_timeout" and "notify_on_missing"
specify how long to wait for an EC2 instance to attach before
considering the attempt to have failed, and email addresses to alert,
respectively.  "missing_timeout" defaults to 20 minutes.

   The "build_wait_timeout" allows you to specify how long an
EC2LatentBuildSlave should wait after a build for another build
before it shuts down the EC2 instance.  It defaults to 10 minutes.

   "keypair_name" and "security_name" allow you to specify different
names for these AWS EC2 values.  They both default to
"latent_buildbot_slave".


File: buildbot.info,  Node: Dangers with Latent Buildslaves,  Next: Writing New Latent Buildslaves,  Prev: Amazon Web Services Elastic Compute Cloud ("AWS EC2"),  Up: On-Demand ("Latent") Buildslaves

4.9.2 Dangers with Latent Buildslaves
-------------------------------------

Any latent build slave that interacts with a for-fee service, such as
the EC2LatentBuildSlave, brings significant risks. As already
identified, the configuration will need access to account information
that, if obtained by a criminal, can be used to charge services to
your account. Also, bugs in the buildbot software may lead to
unnecessary charges. In particular, if the master neglects to shut
down an instance for some reason, a virtual machine may be running
unnecessarily, charging against your account. Manual and/or automatic
(e.g. nagios with a plugin using a library like boto) double-checking
may be appropriate.

   A comparatively trivial note is that currently if two instances
try to attach to the same latent buildslave, it is likely that the
system will become confused.  This should not occur, unless, for
instance, you configure a normal build slave to connect with the
authentication of a latent buildslave.  If this situation occurs, stop
all attached instances and restart the master.


File: buildbot.info,  Node: Writing New Latent Buildslaves,  Prev: Dangers with Latent Buildslaves,  Up: On-Demand ("Latent") Buildslaves

4.9.3 Writing New Latent Buildslaves
------------------------------------

Writing a new latent buildslave should only require subclassing
`buildbot.buildslave.AbstractLatentBuildSlave' and implementing
start_instance and stop_instance.

     def start_instance(self):
         # responsible for starting instance that will try to connect with this
         # master. Should return deferred. Problems should use an errback. The
         # callback value can be None, or can be an iterable of short strings to
         # include in the "substantiate success" status message, such as
         # identifying the instance that started.
         raise NotImplementedError

     def stop_instance(self, fast=False):
         # responsible for shutting down instance. Return a deferred. If `fast`,
         # we're trying to shut the master down, so callback as soon as is safe.
         # Callback value is ignored.
         raise NotImplementedError

   See `buildbot.ec2buildslave.EC2LatentBuildSlave' for an example,
or see the test example `buildbot.test_slaves.FakeLatentBuildSlave'.
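
   A bare-bones sketch of such a subclass follows.  The class name and
comments are illustrative only; a real implementation would talk to
whatever service provisions the machine.

     from twisted.internet import defer
     from buildbot.buildslave import AbstractLatentBuildSlave

     class ExampleLatentBuildSlave(AbstractLatentBuildSlave):
         def start_instance(self):
             # ask the (hypothetical) provisioning service to boot a machine
             # that will connect back to this master, then fire the Deferred
             return defer.succeed(["started example instance"])

         def stop_instance(self, fast=False):
             # tear the machine back down; the callback value is ignored
             return defer.succeed(None)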


File: buildbot.info,  Node: Defining Global Properties,  Next: Defining Builders,  Prev: On-Demand ("Latent") Buildslaves,  Up: Configuration

4.10 Defining Global Properties
===============================

The `'properties'' configuration key defines a dictionary of
properties that will be available to all builds started by the
buildmaster:

     c['properties'] = {
         'Widget-version' : '1.2',
         'release-stage' : 'alpha'
     }


File: buildbot.info,  Node: Defining Builders,  Next: Defining Status Targets,  Prev: Defining Global Properties,  Up: Configuration

4.11 Defining Builders
======================

The `c['builders']' key is a list of dictionaries which specify the
Builders. The Buildmaster runs a collection of Builders, each of
which handles a single type of build (e.g. full versus quick), on a
single build slave. A Buildbot which makes sure that the latest code
("HEAD") compiles correctly across four separate architecture will
have four Builders, each performing the same build but on different
slaves (one per platform).

   Each Builder gets a separate column in the waterfall display. In
general, each Builder runs independently (although various kinds of
interlocks can cause one Builder to have an effect on another).

   Each Builder specification dictionary has several required keys:

`name'
     This specifies the Builder's name, which is used in status
     reports.

`slavename'
     This specifies which buildslave will be used by this Builder.
     `slavename' must appear in the `c['slaves']' list. Each
     buildslave can accommodate multiple Builders.

`slavenames'
     If you provide `slavenames' instead of `slavename', you can give
     a list of buildslaves which are capable of running this Builder.
     If multiple buildslaves are available for any given Builder, you
     will have some measure of redundancy: in case one slave goes
     offline, the others can still keep the Builder working. In
     addition, multiple buildslaves will allow multiple simultaneous
     builds for the same Builder, which might be useful if you have a
     lot of forced or "try" builds taking place.

     If you use this feature, it is important to make sure that the
     buildslaves are all, in fact, capable of running the given
     build. The slave hosts should be configured similarly, otherwise
     you will spend a lot of time trying (unsuccessfully) to
     reproduce a failure that only occurs on some of the buildslaves
     and not the others. Different platforms, operating systems,
     versions of major programs or libraries, all these things mean
     you should use separate Builders.

`builddir'
     This specifies the name of a subdirectory (under the base
     directory) in which everything related to this builder will be
     placed. On the buildmaster, this holds build status information.
     On the buildslave, this is where checkouts, compiles, and tests
     are run.

`factory'
     This is a `buildbot.process.factory.BuildFactory' instance which
     controls how the build is performed. Full details appear in
     their own chapter, *Note Build Process::. Parameters like the
     location of the CVS repository and the compile-time options used
     for the build are generally provided as arguments to the
     factory's constructor.


   Other optional keys may be set on each Builder:

`category'
     If provided, this is a string that identifies a category for the
     builder to be a part of. Status clients can limit themselves to a
     subset of the available categories. A common use for this is to
     add new builders to your setup (for a new module, or for a new
     buildslave) that do not work correctly yet, and to integrate
     them gradually alongside the active builders. You can put these
     new builders in a test category, make your main status clients
     ignore them, and have only private status clients pick them up.
     As soon as they work, you can move them over to the active
     category.
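
   Putting these keys together, here is a sketch of a complete
builder specification. The slave names, build command, and category
are placeholders chosen for this example:

     from buildbot.process import factory
     from buildbot.steps import shell

     f_quick = factory.BuildFactory()
     f_quick.addStep(shell.ShellCommand(command=["make", "quick"]))

     c['builders'] = [
         {'name': 'quick-linux',
          'slavenames': ['bot-linux1', 'bot-linux2'],
          'builddir': 'quick-linux',
          'factory': f_quick,
          'category': 'experimental'},
         ]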



File: buildbot.info,  Node: Defining Status Targets,  Next: Debug options,  Prev: Defining Builders,  Up: Configuration

4.12 Defining Status Targets
============================

The Buildmaster has a variety of ways to present build status to
various users. Each such delivery method is a "Status Target" object
in the configuration's `status' list. To add status targets, you just
append more objects to this list:

     c['status'] = []

     from buildbot.status import html
     c['status'].append(html.Waterfall(http_port=8010))

     from buildbot.status import mail
     m = mail.MailNotifier(fromaddr="buildbot@localhost",
                           extraRecipients=["builds@lists.example.com"],
                           sendToInterestedUsers=False)
     c['status'].append(m)

     from buildbot.status import words
     c['status'].append(words.IRC(host="irc.example.com", nick="bb",
                                  channels=["#example"]))

   Status delivery has its own chapter, *Note Status Delivery::, in
which all the built-in status targets are documented.


File: buildbot.info,  Node: Debug options,  Prev: Defining Status Targets,  Up: Configuration

4.13 Debug options
==================

If you set `c['debugPassword']', then you can connect to the
buildmaster with the diagnostic tool launched by `buildbot
debugclient MASTER:PORT'. From this tool, you can reload the config
file, manually force builds, and inject changes, which may be useful
for testing your buildmaster without actually committing changes to
your repository (or before you have the Change Sources set up). The
debug tool uses the same port number as the slaves do:
`c['slavePortnum']', and is authenticated with this password.

     c['debugPassword'] = "debugpassword"

   If you set `c['manhole']' to an instance of one of the classes in
`buildbot.manhole', you can telnet or ssh into the buildmaster and
get an interactive Python shell, which may be useful for debugging
buildbot internals. It is probably only useful for buildbot
developers. It exposes full access to the buildmaster's account
(including the ability to modify and delete files), so it should not
be enabled with a weak or easily guessable password.

   There are three separate `Manhole' classes. Two of them use SSH,
one uses unencrypted telnet. Two of them use a username+password
combination to grant access, one of them uses an SSH-style
`authorized_keys' file which contains a list of ssh public keys.

`manhole.AuthorizedKeysManhole'
     You construct this with the name of a file that contains one SSH
     public key per line, just like `~/.ssh/authorized_keys'. If you
     provide a non-absolute filename, it will be interpreted relative
     to the buildmaster's base directory.

`manhole.PasswordManhole'
     This one accepts SSH connections but asks for a username and
     password when authenticating. It accepts only one such pair.

`manhole.TelnetManhole'
     This accepts regular unencrypted telnet connections, and asks
     for a username/password pair before providing access. Because
     this username/password is transmitted in the clear, and because
     Manhole access to the buildmaster is equivalent to granting full
     shell privileges to both the buildmaster and all the buildslaves
     (and to all accounts which then run code produced by the
     buildslaves), it is highly recommended that you use one of the
     SSH manholes instead.


     # some examples:
     from buildbot import manhole
     c['manhole'] = manhole.AuthorizedKeysManhole(1234, "authorized_keys")
     c['manhole'] = manhole.PasswordManhole(1234, "alice", "mysecretpassword")
     c['manhole'] = manhole.TelnetManhole(1234, "bob", "snoop_my_password_please")

   The `Manhole' instance can be configured to listen on a specific
port. You may wish to have this listening port bind to the loopback
interface (sometimes known as "lo0", "localhost", or 127.0.0.1) to
restrict access to clients which are running on the same host.

     from buildbot.manhole import PasswordManhole
     c['manhole'] = PasswordManhole("tcp:9999:interface=127.0.0.1","admin","passwd")

   To have the `Manhole' listen on all interfaces, use `"tcp:9999"'
or simply 9999. This port specification uses
`twisted.application.strports', so you can make it listen on SSL or
even UNIX-domain sockets if you want.

   Note that using any Manhole requires that the TwistedConch package
be installed, and that you be using Twisted version 2.0 or later.

   The buildmaster's SSH server will use a different host key than the
normal sshd running on a typical unix host. This will cause the ssh
client to complain about a "host key mismatch", because it does not
realize there are two separate servers running on the same host. To
avoid this, use a clause like the following in your `.ssh/config'
file:

     Host remotehost-buildbot
      HostName remotehost
      HostKeyAlias remotehost-buildbot
      Port 9999
      # use 'user' if you use PasswordManhole and your name is not 'admin'.
      # if you use AuthorizedKeysManhole, this probably doesn't matter.
      User admin


File: buildbot.info,  Node: Getting Source Code Changes,  Next: Build Process,  Prev: Configuration,  Up: Top

5 Getting Source Code Changes
*****************************

The most common way to use the Buildbot is centered around the idea of
`Source Trees': a directory tree filled with source code of some form
which can be compiled and/or tested. Some projects use languages that
don't involve any compilation step: nevertheless there may be a
`build' phase where files are copied or rearranged into a form that
is suitable for installation. Some projects do not have unit tests,
and the Buildbot is merely helping to make sure that the sources can
compile correctly. But in all of these cases, the thing-being-tested
is a single source tree.

   A Version Control System maintains a source tree, and tells the
buildmaster when it changes. The first step of each Build is typically
to acquire a copy of some version of this tree.

   This chapter describes how the Buildbot learns about what Changes
have occurred. For more information on VC systems and Changes, see
*note Version Control Systems::.

* Menu:

* Change Sources::
* Choosing ChangeSources::
* CVSToys - PBService::
* Mail-parsing ChangeSources::
* PBChangeSource::
* P4Source::
* BonsaiPoller::
* SVNPoller::
* MercurialHook::
* Bzr Hook::
* Bzr Poller::


File: buildbot.info,  Node: Change Sources,  Next: Choosing ChangeSources,  Prev: Getting Source Code Changes,  Up: Getting Source Code Changes

5.1 Change Sources
==================

Each Buildmaster watches a single source tree. Changes can be provided
by a variety of ChangeSource types, however any given project will
typically have only a single ChangeSource active. This section
provides a description of all available ChangeSource types and
explains how to set up each of them.

   There are a variety of ChangeSources available, some of which are
meant to be used in conjunction with other tools to deliver Change
events from the VC repository to the buildmaster.

   * CVSToys This ChangeSource opens a TCP connection from the
     buildmaster to a waiting FreshCVS daemon that lives on the
     repository machine, and subscribes to hear about Changes.

   * MaildirSource This one watches a local maildir-format inbox for
     email sent out by the repository when a change is made. When a
     message arrives, it is parsed to create the Change object. A
     variety of parsing functions are available to accommodate
     different email-sending tools.

   * PBChangeSource This ChangeSource listens on a local TCP socket
     for inbound connections from a separate tool. Usually, this tool
     would be run on the VC repository machine in a commit hook. It
     is expected to connect to the TCP socket and send a Change
     message over the network connection. The `buildbot sendchange'
     command is one example of a tool that knows how to send these
     messages, so you can write a commit script for your VC system
     that calls it to deliver the Change.  There are other tools in
     the contrib/ directory that use the same protocol.


   As a quick guide, here is a list of VC systems and the
ChangeSources that might be useful with them. All of these
ChangeSources are in the `buildbot.changes' module.

`CVS'
        * freshcvs.FreshCVSSource (connected via TCP to the freshcvs
          daemon)

        * mail.FCMaildirSource (watching for email sent by a freshcvs
          daemon)

        * mail.BonsaiMaildirSource (watching for email sent by Bonsai)

        * mail.SyncmailMaildirSource (watching for email sent by
          syncmail)

        * pb.PBChangeSource (listening for connections from `buildbot
          sendchange' run in a loginfo script)

        * pb.PBChangeSource (listening for connections from a
          long-running `contrib/viewcvspoll.py' polling process which
          examines the ViewCVS database directly)

`SVN'
        * pb.PBChangeSource (listening for connections from
          `contrib/svn_buildbot.py' run in a postcommit script)

        * pb.PBChangeSource (listening for connections from a
          long-running `contrib/svn_watcher.py' or
          `contrib/svnpoller.py' polling process)

        * mail.SVNCommitEmailMaildirSource (watching for email sent
          by commit-email.pl)

        * svnpoller.SVNPoller (polling the SVN repository)

`Darcs'
        * pb.PBChangeSource (listening for connections from
          `contrib/darcs_buildbot.py' in a commit script)

`Mercurial'
        * pb.PBChangeSource (listening for connections from
          `contrib/hg_buildbot.py' run in an 'incoming' hook)

        * pb.PBChangeSource (listening for connections from
          `buildbot/changes/hgbuildbot.py' run as an in-process
          'changegroup' hook)

`Arch/Bazaar'
        * pb.PBChangeSource (listening for connections from
          `contrib/arch_buildbot.py' run in a commit hook)

`Bzr (the newer Bazaar)'
        * pb.PBChangeSource (listening for connections from
          `contrib/bzr_buildbot.py' run in a post-change-branch-tip
          or commit hook)

        * `contrib/bzr_buildbot.py''s BzrPoller (polling the Bzr
          repository)

`Git'
        * pb.PBChangeSource (listening for connections from
          `contrib/git_buildbot.py' run in the post-receive hook)


   All VC systems can be driven by a PBChangeSource and the `buildbot
sendchange' tool run from some form of commit script.  If you write
an email parsing function, they can also all be driven by a suitable
`MaildirSource'.


File: buildbot.info,  Node: Choosing ChangeSources,  Next: CVSToys - PBService,  Prev: Change Sources,  Up: Getting Source Code Changes

5.2 Choosing ChangeSources
==========================

The `master.cfg' configuration file has a dictionary key named
`BuildmasterConfig['change_source']', which holds the active
`IChangeSource' object. The config file will typically create an
object from one of the classes described below and stuff it into this
key.

   Each buildmaster typically has just a single ChangeSource, since
it is only watching a single source tree. But if, for some reason,
you need multiple sources, just set `c['change_source']' to a list of
ChangeSources; it will accept that too.

     s = FreshCVSSourceNewcred(host="host", port=4519,
                               user="alice", passwd="secret",
                               prefix="Twisted")
     BuildmasterConfig['change_source'] = [s]

   Each source tree has a nominal `top'. Each Change has a list of
filenames, which are all relative to this top location. The
ChangeSource is responsible for doing whatever is necessary to
accomplish this. Most sources have a `prefix' argument: a partial
pathname which is stripped from the front of all filenames provided to
that `ChangeSource'. Files which are outside this sub-tree are
ignored by the changesource: it does not generate Changes for those
files.


File: buildbot.info,  Node: CVSToys - PBService,  Next: Mail-parsing ChangeSources,  Prev: Choosing ChangeSources,  Up: Getting Source Code Changes

5.3 CVSToys - PBService
=======================

The CVSToys (http://purl.net/net/CVSToys) package provides a server
which runs on the machine that hosts the CVS repository it watches.
It has a variety of ways to distribute commit notifications, and
offers a flexible regexp-based way to filter out uninteresting
changes. One of the notification options is named `PBService' and
works by listening on a TCP port for clients. These clients subscribe
to hear about commit notifications.

   The buildmaster has a CVSToys-compatible `PBService' client built
in. There are two versions of it, one for old versions of CVSToys
(1.0.9 and earlier) which used the `oldcred' authentication
framework, and one for newer versions (1.0.10 and later) which use
`newcred'. Both are classes in the `buildbot.changes.freshcvs'
package.

   `FreshCVSSourceNewcred' objects are created with the following
parameters:

``host' and `port''
     these specify where the CVSToys server can be reached

``user' and `passwd''
     these specify the login information for the CVSToys server
     (`freshcvs'). These must match the server's values, which are
     defined in the `freshCfg' configuration file (which lives in the
     CVSROOT directory of the repository).

``prefix''
     this is the prefix to be found and stripped from filenames
     delivered by the CVSToys server. Most projects live in
     sub-directories of the main repository, as siblings of the
     CVSROOT sub-directory, so typically this prefix is set to that
     top sub-directory name.


Example
=======

To set up the freshCVS server, add a statement like the following to
your `freshCfg' file:

     pb = ConfigurationSet([
         (None, None, None, PBService(userpass=('foo', 'bar'), port=4519)),
         ])

   This will announce all changes to a client which connects to port
4519 using a username of 'foo' and a password of 'bar'.

   Then add a clause like this to your buildmaster's `master.cfg':

     BuildmasterConfig['change_source'] = FreshCVSSource("cvs.example.com", 4519,
                                                         "foo", "bar",
                                                         prefix="glib/")

   where "cvs.example.com" is the host that is running the FreshCVS
daemon, and "glib" is the top-level directory (relative to the
repository's root) where all your source code lives. Most
repositories hold one or more projects (along with CVSROOT/
to hold admin files like loginfo and freshCfg); the prefix= argument
tells the buildmaster to ignore everything outside that directory,
and to strip that common prefix from all pathnames it handles.


File: buildbot.info,  Node: Mail-parsing ChangeSources,  Next: PBChangeSource,  Prev: CVSToys - PBService,  Up: Getting Source Code Changes

5.4 Mail-parsing ChangeSources
==============================

Many projects publish information about changes to their source tree
by sending an email message out to a mailing list, frequently named
PROJECT-commits or PROJECT-changes. Each message usually contains a
description of the change (who made the change, which files were
affected) and sometimes a copy of the diff. Humans can subscribe to
this list to stay informed about what's happening to the source tree.

   The Buildbot can also be subscribed to a -commits mailing list, and
can trigger builds in response to Changes that it hears about. The
buildmaster admin needs to arrange for these email messages to arrive
in a place where the buildmaster can find them, and configure the
buildmaster to parse the messages correctly. Once that is in place,
the email parser will create Change objects and deliver them to the
Schedulers (see *note Change Sources and Schedulers::) just like any
other ChangeSource.

   There are two components to setting up an email-based ChangeSource.
The first is to route the email messages to the buildmaster, which is
done by dropping them into a "maildir". The second is to actually
parse the messages, which is highly dependent upon the tool that was
used to create them. Each VC system has a collection of favorite
change-emailing tools, and each has a slightly different format, so
each has a different parsing function. There is a separate
ChangeSource variant for each parsing function.

   Once you've chosen a maildir location and a parsing function,
create the change source and put it in `c['change_source']':

     from buildbot.changes.mail import SyncmailMaildirSource
     c['change_source'] = SyncmailMaildirSource("~/maildir-buildbot",
                                                prefix="/trunk/")

* Menu:

* Subscribing the Buildmaster::
* Using Maildirs::
* Parsing Email Change Messages::


File: buildbot.info,  Node: Subscribing the Buildmaster,  Next: Using Maildirs,  Prev: Mail-parsing ChangeSources,  Up: Mail-parsing ChangeSources

5.4.1 Subscribing the Buildmaster
---------------------------------

The recommended way to install the buildbot is to create a dedicated
account for the buildmaster. If you do this, the account will probably
have a distinct email address (perhaps <buildmaster@example.org>).
Then just arrange for this account's email to be delivered to a
suitable maildir (described in the next section).

   If the buildbot does not have its own account, "extension
addresses" can be used to distinguish between email intended for the
buildmaster and email intended for the rest of the account. In most
modern MTAs, an account such as `account@example.org' has control
over every email address at example.org which begins with "account",
such that email addressed to <account-foo@example.org> can be
delivered to a different destination than <account-bar@example.org>.
qmail does this
by using separate .qmail files for the two destinations (`.qmail-foo'
and `.qmail-bar', with `.qmail' controlling the base address and
`.qmail-default' controlling all other extensions). Other MTAs have
similar mechanisms.

   Thus you can assign an extension address like
<foo-buildmaster@example.org> to the buildmaster, and retain
<foo@example.org> for your own use.


File: buildbot.info,  Node: Using Maildirs,  Next: Parsing Email Change Messages,  Prev: Subscribing the Buildmaster,  Up: Mail-parsing ChangeSources

5.4.2 Using Maildirs
--------------------

A "maildir" is a simple directory structure originally developed for
qmail that allows safe atomic update without locking. Create a base
directory with three subdirectories: "new", "tmp", and "cur".  When
messages arrive, they are put into a uniquely-named file (using pids,
timestamps, and random numbers) in "tmp". When the file is complete,
it is atomically renamed into "new". Eventually the buildmaster
notices the file in "new", reads and parses the contents, then moves
it into "cur". A cronjob can be used to delete files in "cur" at
leisure.

   Maildirs are frequently created with the `maildirmake' tool, but a
simple `mkdir -p ~/MAILDIR/{cur,new,tmp}' is pretty much equivalent.

   Many modern MTAs can deliver directly to maildirs. The usual
.forward or .procmailrc syntax is to name the base directory with a
trailing slash, so something like `~/MAILDIR/'. qmail and postfix are
maildir-capable MTAs, and procmail is a maildir-capable MDA (Mail
Delivery Agent).

   For MTAs which cannot put files into maildirs directly, the
"safecat" tool can be executed from a .forward file to accomplish the
same thing.

   The Buildmaster uses the Linux DNotify facility to receive
immediate notification when the maildir's "new" directory has
changed. When this facility is not available, it polls the directory
for new messages every 10 seconds by default.


File: buildbot.info,  Node: Parsing Email Change Messages,  Prev: Using Maildirs,  Up: Mail-parsing ChangeSources

5.4.3 Parsing Email Change Messages
-----------------------------------

The second component to setting up an email-based ChangeSource is to
parse the actual notices. This is highly dependent upon the VC system
and commit script in use.

   A couple of common tools used to create these change emails are:

`CVS'

    `CVSToys MailNotifier'
          *note FCMaildirSource::

    `Bonsai notification'
          *note BonsaiMaildirSource::

    `syncmail'
          *note SyncmailMaildirSource::

`SVN'

    `svnmailer'
          http://opensource.perlig.de/en/svnmailer/

    `commit-email.pl'
          *note SVNCommitEmailMaildirSource::

`Mercurial'

    `NotifyExtension'
          http://www.selenic.com/mercurial/wiki/index.cgi/NotifyExtension

`Git'

    `post-receive-email'
          http://git.kernel.org/?p=git/git.git;a=blob;f=contrib/hooks/post-receive-email;hb=HEAD


   The following sections describe the parsers available for each of
these tools.

   Most of these parsers accept a `prefix=' argument, which is used
to limit the set of files that the buildmaster pays attention to. This
is most useful for systems like CVS and SVN which put multiple
projects in a single repository (or use repository names to indicate
branches). Each filename that appears in the email is tested against
the prefix: if the filename does not start with the prefix, the file
is ignored. If the filename _does_ start with the prefix, that prefix
is stripped from the filename before any further processing is done.
Thus the prefix usually ends with a slash.

* Menu:

* FCMaildirSource::
* SyncmailMaildirSource::
* BonsaiMaildirSource::
* SVNCommitEmailMaildirSource::


File: buildbot.info,  Node: FCMaildirSource,  Next: SyncmailMaildirSource,  Prev: Parsing Email Change Messages,  Up: Parsing Email Change Messages

5.4.3.1 FCMaildirSource
.......................

http://twistedmatrix.com/users/acapnotic/wares/code/CVSToys/

   This parser works with the CVSToys `MailNotification' action,
which will send email to a list of recipients for each commit. This
tends to work better than using `/bin/mail' from within the
CVSROOT/loginfo file directly, as CVSToys will batch together all
files changed during the same CVS invocation, and can provide more
information (like creating a ViewCVS URL for each file changed).

   The Buildbot's `FCMaildirSource' knows how to parse these CVSToys
messages and turn them into Change objects. It can be given two
parameters: the directory name of the maildir root, and the prefix to
strip.

     from buildbot.changes.mail import FCMaildirSource
     c['change_source'] = FCMaildirSource("~/maildir-buildbot")


File: buildbot.info,  Node: SyncmailMaildirSource,  Next: BonsaiMaildirSource,  Prev: FCMaildirSource,  Up: Parsing Email Change Messages

5.4.3.2 SyncmailMaildirSource
.............................

http://sourceforge.net/projects/cvs-syncmail

   `SyncmailMaildirSource' knows how to parse the message format used
by the CVS "syncmail" script.

     from buildbot.changes.mail import SyncmailMaildirSource
     c['change_source'] = SyncmailMaildirSource("~/maildir-buildbot")


File: buildbot.info,  Node: BonsaiMaildirSource,  Next: SVNCommitEmailMaildirSource,  Prev: SyncmailMaildirSource,  Up: Parsing Email Change Messages

5.4.3.3 BonsaiMaildirSource
...........................

http://www.mozilla.org/bonsai.html

   `BonsaiMaildirSource' parses messages sent out by Bonsai, the CVS
tree-management system built by Mozilla.

     from buildbot.changes.mail import BonsaiMaildirSource
     c['change_source'] = BonsaiMaildirSource("~/maildir-buildbot")


File: buildbot.info,  Node: SVNCommitEmailMaildirSource,  Prev: BonsaiMaildirSource,  Up: Parsing Email Change Messages

5.4.3.4 SVNCommitEmailMaildirSource
...................................

`SVNCommitEmailMaildirSource' parses messages sent out by the
`commit-email.pl' script, which is included in the Subversion
distribution.

   It does not currently handle branches: all of the Change objects
that it creates will be associated with the default (i.e. trunk)
branch.

     from buildbot.changes.mail import SVNCommitEmailMaildirSource
     c['change_source'] = SVNCommitEmailMaildirSource("~/maildir-buildbot")


File: buildbot.info,  Node: PBChangeSource,  Next: P4Source,  Prev: Mail-parsing ChangeSources,  Up: Getting Source Code Changes

5.5 PBChangeSource
==================

The last kind of ChangeSource actually listens on a TCP port for
clients to connect and push change notices _into_ the Buildmaster.
This is used by the built-in `buildbot sendchange' notification tool,
as well as the VC-specific `contrib/svn_buildbot.py',
`contrib/arch_buildbot.py', `contrib/hg_buildbot.py' tools, and the
`buildbot.changes.hgbuildbot' hook. These tools are run by the
repository (in a commit hook script), and connect to the buildmaster
directly each time a file is committed. This is also useful for
creating new kinds of change sources that work on a `push' model
instead of some kind of subscription scheme, for example a script
which is run out of an email .forward file.

   This ChangeSource can be configured to listen on its own TCP port,
or it can share the port that the buildmaster is already using for the
buildslaves to connect. (This is possible because the
`PBChangeSource' uses the same protocol as the buildslaves, and they
can be distinguished by the `username' attribute used when the
initial connection is established). It might be useful to have it
listen on a different port if, for example, you wanted to establish
different firewall rules for that port. You could allow only the SVN
repository machine access to the `PBChangeSource' port, while
allowing only the buildslave machines access to the slave port. Or you
could just expose one port and run everything over it. _Note: this
feature is not yet implemented; the PBChangeSource will always share
the slave port and will always have a `user' name of `change', and a
passwd of `changepw'. These limitations will be removed in the
future._.

   The `PBChangeSource' is created with the following arguments. All
are optional.

``port''
     which port to listen on. If `None' (which is the default), it
     shares the port used for buildslave connections. _Not
     Implemented, always set to `None'_.

``user' and `passwd''
     The user/passwd account information that the client program must
     use to connect. Defaults to `change' and `changepw'. _Not
     Implemented, `user' is currently always set to `change',
     `passwd' is always set to `changepw'_.

``prefix''
     The prefix to be found and stripped from filenames delivered
     over the connection. Any filenames which do not start with this
     prefix will be removed. If all the filenames in a given Change
     are removed, then that whole Change will be dropped. This string
     should probably end with a directory separator.

     This is useful for changes coming from version control systems
     that represent branches as parent directories within the
     repository (like SVN and Perforce). Use a prefix of 'trunk/' or
     'project/branches/foobranch/' to only follow one branch and to
     get correct tree-relative filenames. Without a prefix, the
     PBChangeSource will probably deliver Changes with filenames like
     `trunk/foo.c' instead of just `foo.c'. Of course this also
     depends upon the tool sending the Changes in (like `buildbot
     sendchange') and what filenames it is delivering: that tool may
     be filtering and stripping prefixes at the sending end.
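
   As a sketch, a minimal configuration (which, given the limitations
described above, listens on the slave port with the hardcoded
`change'/`changepw' credentials) that strips a hypothetical 'trunk/'
prefix looks like this:

     from buildbot.changes.pb import PBChangeSource
     c['change_source'] = PBChangeSource(prefix="trunk/")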



File: buildbot.info,  Node: P4Source,  Next: BonsaiPoller,  Prev: PBChangeSource,  Up: Getting Source Code Changes

5.6 P4Source
============

The `P4Source' periodically polls a Perforce
(http://www.perforce.com/) depot for changes. It accepts the
following arguments:

``p4base''
     The base depot path to watch, without the trailing '/...'.

``p4port''
     The Perforce server to connect to (as host:port).

``p4user''
     The Perforce user.

``p4passwd''
     The Perforce password.

``p4bin''
     An optional string parameter. Specify the location of the
     perforce command line binary (p4).  You only need to do this if
     the perforce binary is not in the path of the buildbot user.
     Defaults to "p4".

``split_file''
     A function that maps a pathname, without the leading `p4base',
     to a (branch, filename) tuple. The default just returns (None,
     branchfile), which effectively disables branch support. You
     should supply a function which understands your repository
     structure.

``pollinterval''
     How often to poll, in seconds. Defaults to 600 (10 minutes).

``histmax''
     The maximum number of changes to inspect at a time. If more than
     this number occur since the last poll, older changes will be
     silently ignored.

Example
=======

This configuration uses the `P4PORT', `P4USER', and `P4PASSWD'
specified in the buildmaster's environment. It watches a project in
which the branch name is simply the next path component, and the file
is all path components after.

     from buildbot.changes import p4poller
     # split('/',1) maps 'BRANCH/path/to/file' to [branch, filename]
     s = p4poller.P4Source(p4base='//depot/project/',
                           split_file=lambda branchfile: branchfile.split('/',1),
                          )
     c['change_source'] = s


File: buildbot.info,  Node: BonsaiPoller,  Next: SVNPoller,  Prev: P4Source,  Up: Getting Source Code Changes

5.7 BonsaiPoller
================

The `BonsaiPoller' periodically polls a Bonsai server. This is a CGI
script accessed through a web server that provides information about
a CVS tree, for example the Mozilla bonsai server at
`http://bonsai.mozilla.org'. Bonsai servers are usable by both humans
and machines. In this case, the buildbot's change source forms a
query which asks about any files in the specified branch which have
changed since the last query.

   Please take a look at the BonsaiPoller docstring for details about
the arguments it accepts.


File: buildbot.info,  Node: SVNPoller,  Next: MercurialHook,  Prev: BonsaiPoller,  Up: Getting Source Code Changes

5.8 SVNPoller
=============

The `buildbot.changes.svnpoller.SVNPoller' is a ChangeSource which
periodically polls a Subversion (http://subversion.tigris.org/)
repository for new revisions, by running the `svn log' command in a
subshell. It can watch a single branch or multiple branches.

   `SVNPoller' accepts the following arguments:

`svnurl'
     The base URL path to watch, like
     `svn://svn.twistedmatrix.com/svn/Twisted/trunk', or
     `http://divmod.org/svn/Divmod/', or even
     `file:///home/svn/Repository/ProjectA/branches/1.5/'. This must
     include the access scheme, the location of the repository (both
     the hostname for remote ones, and any additional directory names
     necessary to get to the repository), and the sub-path within the
     repository's virtual filesystem for the project and branch of
     interest.

     The `SVNPoller' will only pay attention to files inside the
     subdirectory specified by the complete svnurl.

`split_file'
     A function to convert pathnames into (branch, relative_pathname)
     tuples. Use this to explain your repository's branch-naming
     policy to `SVNPoller'. This function must accept a single string
     and return a two-entry tuple. There are a few utility functions
     in `buildbot.changes.svnpoller' that can be used as a
     `split_file' function, see below for details.

     The default value always returns (None, path), which indicates
     that all files are on the trunk.

     Subclasses of `SVNPoller' can override the `split_file' method
     instead of using the `split_file=' argument.

`svnuser'
     An optional string parameter. If set, the `--username' argument
     will
     be added to all `svn' commands. Use this if you have to
     authenticate to the svn server before you can do `svn info' or
     `svn log' commands.

`svnpasswd'
     Like `svnuser', this will cause a `--password' argument to be
     passed to all svn commands.

`pollinterval'
     How often to poll, in seconds. Defaults to 600 (checking once
     every 10 minutes). Lower this if you want the buildbot to notice
     changes faster, raise it if you want to reduce the network and
     CPU load on your svn server. Please be considerate of public SVN
     repositories by using a large interval when polling them.

`histmax'
     The maximum number of changes to inspect at a time. Every
     POLLINTERVAL seconds, the `SVNPoller' asks for the last HISTMAX
     changes and looks through them for any ones it does not already
     know about. If more than HISTMAX revisions have been committed
     since the last poll, older changes will be silently ignored.
     Larger values of histmax will cause more time and memory to be
     consumed on each poll attempt.  `histmax' defaults to 100.

`svnbin'
     This controls the `svn' executable to use. If subversion is
     installed in a weird place on your system (outside of the
     buildmaster's `$PATH'), use this to tell `SVNPoller' where to
     find it. The default value of "svn" will almost always be
     sufficient.
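
   As a sketch, here is an `SVNPoller' that authenticates to a
hypothetical private repository and polls every five minutes:

     from buildbot.changes.svnpoller import SVNPoller
     c['change_source'] = SVNPoller("https://svn.example.org/svn/ProjectA/trunk",
                                    svnuser="buildbot", svnpasswd="secret",
                                    pollinterval=300)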


Branches
========

Each source file that is tracked by a Subversion repository has a
fully-qualified SVN URL in the following form:
(REPOURL)(PROJECT-plus-BRANCH)(FILEPATH). When you create the
`SVNPoller', you give it a `svnurl' value that includes all of the
REPOURL and possibly some portion of the PROJECT-plus-BRANCH string.
The `SVNPoller' is responsible for producing Changes that contain a
branch name and a FILEPATH (which is relative to the top of a
checked-out tree). The details of how these strings are split up
depend upon how your repository names its branches.

PROJECT/BRANCHNAME/FILEPATH repositories
----------------------------------------

One common layout is to have all the various projects that share a
repository get a single top-level directory each. Then under a given
project's directory, you get two subdirectories, one named "trunk"
and another named "branches". Under "branches" you have a bunch of
other directories, one per branch, with names like "1.5.x" and
"testing". It is also common to see directories like "tags" and
"releases" next to "branches" and "trunk".

   For example, the Twisted project has a subversion server on
"svn.twistedmatrix.com" that hosts several sub-projects. The
repository is available through a SCHEME of "svn:". The primary
sub-project is Twisted, of course, with a repository root of
"svn://svn.twistedmatrix.com/svn/Twisted". Another sub-project is
Informant, with a root of
"svn://svn.twistedmatrix.com/svn/Informant", etc. Inside any
checked-out Twisted tree, there is a file named bin/trial (which is
used to run unit test suites).

   The trunk for Twisted is in
"svn://svn.twistedmatrix.com/svn/Twisted/trunk", and the
fully-qualified SVN URL for the trunk version of `trial' would be
"svn://svn.twistedmatrix.com/svn/Twisted/trunk/bin/trial". The same
SVNURL for that file on a branch named "1.5.x" would be
"svn://svn.twistedmatrix.com/svn/Twisted/branches/1.5.x/bin/trial".

   To set up a `SVNPoller' that watches the Twisted trunk (and
nothing else), we would use the following:

     from buildbot.changes.svnpoller import SVNPoller
     c['change_source'] = SVNPoller("svn://svn.twistedmatrix.com/svn/Twisted/trunk")

   In this case, every Change that our `SVNPoller' produces will have
`.branch=None', to indicate that the Change is on the trunk.  No
other sub-projects or branches will be tracked.

   If we want our ChangeSource to follow multiple branches, we have
to do two things. First we have to change our `svnurl=' argument to
watch more than just ".../Twisted/trunk". We will set it to
".../Twisted" so that we'll see both the trunk and all the branches.
Second, we have to tell `SVNPoller' how to split the
(PROJECT-plus-BRANCH)(FILEPATH) strings it gets from the repository
out into (BRANCH) and (FILEPATH) pairs.

   We do the latter by providing a "split_file" function. This
function is responsible for splitting something like
"branches/1.5.x/bin/trial" into `branch'="branches/1.5.x" and
`filepath'="bin/trial". This function is always given a string that
names a file relative to the subdirectory pointed to by the
`SVNPoller''s `svnurl=' argument. It is expected to return a
(BRANCHNAME, FILEPATH) tuple (in which FILEPATH is relative to the
branch indicated), or None to indicate that the file is outside any
project of interest.

   (note that we want to see "branches/1.5.x" rather than just
"1.5.x" because when we perform the SVN checkout, we will probably
append the branch name to the baseURL, which requires that we keep the
"branches" component in there. Other VC schemes use a different
approach towards branches and may not require this artifact.)

   If your repository uses this same PROJECT/BRANCH/FILEPATH naming
scheme, the following function will work:

     def split_file_branches(path):
         pieces = path.split('/')
         if pieces[0] == 'trunk':
             return (None, '/'.join(pieces[1:]))
         elif pieces[0] == 'branches':
             return ('/'.join(pieces[0:2]),
                     '/'.join(pieces[2:]))
         else:
             return None

   This function is provided as
`buildbot.changes.svnpoller.split_file_branches' for your
convenience. So to have our Twisted-watching `SVNPoller' follow
multiple branches, we would use this:

     from buildbot.changes.svnpoller import SVNPoller, split_file_branches
     c['change_source'] = SVNPoller("svn://svn.twistedmatrix.com/svn/Twisted",
                                    split_file=split_file_branches)

   Changes for all sorts of branches (with names like
"branches/1.5.x", and None to indicate the trunk) will be delivered
to the Schedulers.  Each Scheduler is then free to use or ignore each
branch as it sees fit.

BRANCHNAME/PROJECT/FILEPATH repositories
----------------------------------------

Another common way to organize a Subversion repository is to put the
branch name at the top, and the projects underneath. This is
especially frequent when there are a number of related sub-projects
that all get released in a group.

   For example, Divmod.org hosts a project named "Nevow" as well as
one named "Quotient". In a checked-out Nevow tree there is a directory
named "formless" that contains a python source file named
"webform.py". This repository is accessible via webdav (and thus uses
an "http:" scheme) through the divmod.org hostname. There are many
branches in this repository, and they use a (BRANCHNAME)/(PROJECT)
naming policy.

   The fully-qualified SVN URL for the trunk version of webform.py is
`http://divmod.org/svn/Divmod/trunk/Nevow/formless/webform.py'.  You
can do an `svn co' with that URL and get a copy of the latest
version. The 1.5.x branch version of this file would have a URL of
`http://divmod.org/svn/Divmod/branches/1.5.x/Nevow/formless/webform.py'.
The whole Nevow trunk would be checked out with
`http://divmod.org/svn/Divmod/trunk/Nevow', while the Quotient trunk
would be checked out using
`http://divmod.org/svn/Divmod/trunk/Quotient'.

   Now suppose we want to have an `SVNPoller' that only cares about
the Nevow trunk. This case looks just like the PROJECT/BRANCH layout
described earlier:

     from buildbot.changes.svnpoller import SVNPoller
     c['change_source'] = SVNPoller("http://divmod.org/svn/Divmod/trunk/Nevow")

   But what happens when we want to track multiple Nevow branches? We
have to point our `svnurl=' high enough to see all those branches,
but we also don't want to include Quotient changes (since we're only
building Nevow). To accomplish this, we must rely upon the
`split_file' function to help us tell the difference between files
that belong to Nevow and those that belong to Quotient, as well as
figuring out which branch each one is on.

     from buildbot.changes.svnpoller import SVNPoller
     c['change_source'] = SVNPoller("http://divmod.org/svn/Divmod",
                                    split_file=my_file_splitter)

   The `my_file_splitter' function will be called with
repository-relative pathnames like:

`trunk/Nevow/formless/webform.py'
     This is a Nevow file, on the trunk. We want the Change that
     includes this to see a filename of `formless/webform.py', and a
     branch of None.

`branches/1.5.x/Nevow/formless/webform.py'
     This is a Nevow file, on a branch. We want to get
     branch="branches/1.5.x" and filename="formless/webform.py".

`trunk/Quotient/setup.py'
     This is a Quotient file, so we want to ignore it by having
     `my_file_splitter' return None.

`branches/1.5.x/Quotient/setup.py'
     This is also a Quotient file, which should be ignored.

   The following definition for `my_file_splitter' will do the job:

     def my_file_splitter(path):
         pieces = path.split('/')
         if pieces[0] == 'trunk':
             branch = None
             pieces.pop(0) # remove 'trunk'
         elif pieces[0] == 'branches':
             pieces.pop(0) # remove 'branches'
             # grab branch name
             branch = 'branches/' + pieces.pop(0)
         else:
             return None # something weird
         projectname = pieces.pop(0)
         if projectname != 'Nevow':
             return None # wrong project
         return (branch, '/'.join(pieces))


File: buildbot.info,  Node: MercurialHook,  Next: Bzr Hook,  Prev: SVNPoller,  Up: Getting Source Code Changes

5.9 MercurialHook
=================

Since Mercurial is written in python, the hook script can invoke
Buildbot's `sendchange' function directly, rather than having to
spawn an external process. This function delivers the same sort of
changes as `buildbot sendchange' and the various hook scripts in
contrib/, so you'll need to add a `pb.PBChangeSource' to your
buildmaster to receive these changes.

   To set this up, first choose a Mercurial repository that represents
your central "official" source tree. This will be the same repository
that your buildslaves will eventually pull from. Install Buildbot on
the machine that hosts this repository, using the same version of
python as Mercurial is using (so that the Mercurial hook can import
code from buildbot). Then add the following to the `.hg/hgrc' file in
that repository, replacing the buildmaster hostname/portnumber as
appropriate for your buildbot:

     [hooks]
     changegroup.buildbot = python:buildbot.changes.hgbuildbot.hook

     [hgbuildbot]
     master = buildmaster.example.org:9987

   (Note that Mercurial lets you define multiple `changegroup' hooks
by giving them distinct names, like `changegroup.foo' and
`changegroup.bar', which is why we use `changegroup.buildbot' in this
example. There is nothing magical about the "buildbot" suffix in the
hook name. The `[hgbuildbot]' section _is_ special, however, as it is
the only section that the buildbot hook pays attention to.)

   Also note that this runs as a `changegroup' hook, rather than as
an `incoming' hook. The `changegroup' hook is run with multiple
revisions at a time (say, if multiple revisions are being pushed to
this repository in a single `hg push' command), whereas the
`incoming' hook is run with just one revision at a time. The
`hgbuildbot.hook' function will only work with the `changegroup' hook.

   The `[hgbuildbot]' section has two other parameters that you might
specify, both of which control the name of the branch that is
attached to the changes coming from this hook.

   One common branch naming policy for Mercurial repositories is to
use it just like Darcs: each branch goes into a separate repository,
and all the branches for a single project share a common parent
directory.  For example, you might have `/var/repos/PROJECT/trunk/'
and `/var/repos/PROJECT/release'. To use this style, use the
`branchtype = dirname' setting, which simply uses the last component
of the repository's enclosing directory as the branch name:

     [hgbuildbot]
     master = buildmaster.example.org:9987
     branchtype = dirname

   Another approach is to use Mercurial's built-in branches (the kind
created with `hg branch' and listed with `hg branches'). This feature
associates persistent names with particular lines of descent within a
single repository. (note that the buildbot `source.Mercurial'
checkout step does not yet support this kind of branch). To have the
commit hook deliver this sort of branch name with the Change object,
use `branchtype = inrepo':

     [hgbuildbot]
     master = buildmaster.example.org:9987
     branchtype = inrepo

   Finally, if you want to simply specify the branchname directly, for
all changes, use `branch = BRANCHNAME'. This overrides `branchtype':

     [hgbuildbot]
     master = buildmaster.example.org:9987
     branch = trunk

   If you use `branch=' like this, you'll need to put a separate
.hgrc in each repository. If you use `branchtype=', you may be able
to use the same .hgrc for all your repositories, stored in `~/.hgrc'
or `/etc/mercurial/hgrc'.


File: buildbot.info,  Node: Bzr Hook,  Next: Bzr Poller,  Prev: MercurialHook,  Up: Getting Source Code Changes

5.10 Bzr Hook
=============

Bzr is also written in Python, and the Bzr hook depends on Twisted to
send the changes.

   To install, put `contrib/bzr_buildbot.py' in a bzr plugins
directory (e.g., `~/.bazaar/plugins'). Then, in one of your bazaar
configuration files (e.g., `~/.bazaar/locations.conf'), configure the
location you want to connect to buildbot by setting these keys:

`buildbot_on'
     one of 'commit', 'push', or 'change'. Turns the plugin on to
     report changes via commit, changes via push, or any changes to
     the trunk. 'change' is recommended.

`buildbot_server'
     (required to send to a buildbot master) the URL of the buildbot
     master to which you will connect (as of this writing, the same
     server and port to which slaves connect).

`buildbot_port'
     (optional, defaults to 9989) the port of the buildbot master to
     which you will connect (as of this writing, the same server and
     port to which slaves connect)

`buildbot_pqm'
     (optional, defaults to not pqm) Normally, the user that commits
     the revision is the user that is responsible for the change.
     When run in a pqm (Patch Queue Manager, see
     https://launchpad.net/pqm) environment, the user that commits is
     the Patch Queue Manager, and the user that committed the
     *parent* revision is responsible for the change. To turn on the
     pqm mode, set this value to any of (case-insensitive) "Yes",
     "Y", "True", or "T".

`buildbot_dry_run'
     (optional, defaults to not a dry run) Normally, the post-commit
     hook will attempt to communicate with the configured buildbot
     server and port. If this parameter is included and any of
     (case-insensitive) "Yes", "Y", "True", or "T", then the hook
     will simply print what it would have sent, but not attempt to
     contact the buildbot master.

`buildbot_send_branch_name'
     (optional, defaults to not sending the branch name) If your
     buildbot's bzr source build step uses a repourl, do *not* turn
     this on. If your buildbot's bzr build step uses a baseURL, then
     you may set this value to any of (case-insensitive) "Yes", "Y",
     "True", or "T" to have the buildbot master append the branch
     name to the baseURL.


   When buildbot no longer has a hardcoded password, it will be a
configuration option here as well.

   Here's a simple example that you might have in your
`~/.bazaar/locations.conf'.

     [chroot-*:///var/local/myrepo/mybranch]
     buildbot_on = change
     buildbot_server = localhost


File: buildbot.info,  Node: Bzr Poller,  Prev: Bzr Hook,  Up: Getting Source Code Changes

5.11 Bzr Poller
===============

If you cannot insert a Bzr hook in the server, you can use the Bzr
Poller. To use, put `contrib/bzr_buildbot.py' somewhere that your
buildbot configuration can import it. Even putting it in the same
directory as the master.cfg should work. Install the poller in the
buildbot configuration as with any other change source. Minimally,
provide a URL that you want to poll (bzr://, bzr+ssh://, or lp:),
though make sure the buildbot user has the necessary privileges. You may
also want to specify these optional values.

`poll_interval'
     The number of seconds to wait between polls.  Defaults to 10
     minutes.

`branch_name'
     Any value to be used as the branch name. Defaults to None, or
     specify a string, or specify the constants from
     `bzr_buildbot.py' SHORT or FULL to get the short branch name or
     full branch address.

`blame_merge_author'
     Normally, the user that commits the revision is the user that is
     responsible for the change. When run in a pqm (Patch Queue
     Manager, see https://launchpad.net/pqm) environment, the user
     that commits is the Patch Queue Manager, and the user that
     committed the merged, *parent* revision is responsible for the
     change. Set this value to True if this poller points at a
     PQM-managed branch.
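
   As a sketch, assuming `bzr_buildbot.py' has been copied next to
`master.cfg' so that it can be imported directly, and that the poller
class is named `BzrPoller' and takes the URL as its first argument
(check the contrib script for the exact signature), the configuration
might look like this:

     import bzr_buildbot
     c['change_source'] = bzr_buildbot.BzrPoller(
         'bzr+ssh://bzr.example.org/myproject/trunk',
         poll_interval=300)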


File: buildbot.info,  Node: Build Process,  Next: Status Delivery,  Prev: Getting Source Code Changes,  Up: Top

6 Build Process
***************

A `Build' object is responsible for actually performing a build.  It
gets access to a remote `SlaveBuilder' where it may run commands, and
a `BuildStatus' object where it must emit status events. The `Build'
is created by the Builder's `BuildFactory'.

   The default `Build' class is made up of a fixed sequence of
`BuildSteps', executed one after another until all are complete (or
one of them indicates that the build should be halted early). The
default `BuildFactory' creates instances of this `Build' class with a
list of `BuildSteps', so the basic way to configure the build is to
provide a list of `BuildSteps' to your `BuildFactory'.

   More complicated `Build' subclasses can make other decisions:
execute some steps only if certain files were changed, or if certain
previous steps passed or failed. The base class has been written to
allow users to express basic control flow without writing code, but
you can always subclass and customize to achieve more specialized
behavior.

* Menu:

* Build Steps::
* Interlocks::
* Build Factories::


File: buildbot.info,  Node: Build Steps,  Next: Interlocks,  Prev: Build Process,  Up: Build Process

6.1 Build Steps
===============

`BuildStep's are usually specified in the buildmaster's configuration
file, in a list that goes into the `BuildFactory'.  The `BuildStep'
instances in this list are used as templates to construct new
independent copies for each build (so that state can be kept on the
`BuildStep' in one build without affecting a later build). Each
`BuildFactory' can be created with a list of steps, or the factory
can be created empty and then steps added to it using the `addStep'
method:

     from buildbot.steps import source, shell
     from buildbot.process import factory

     f = factory.BuildFactory()
     f.addStep(source.SVN(svnurl="http://svn.example.org/Trunk/"))
     f.addStep(shell.ShellCommand(command=["make", "all"]))
     f.addStep(shell.ShellCommand(command=["make", "test"]))

   In earlier versions (0.7.5 and older), these steps were specified
with a tuple of (step_class, keyword_arguments). Steps can still be
specified this way, but the preferred form is to pass actual
`BuildStep' instances to `addStep', because that gives the
`BuildStep' class a chance to do some validation on the arguments.

   If you have a common set of steps which are used in several
factories, the `addSteps' method may be handy.  It takes an iterable
of `BuildStep' instances.

     setup_steps = [
         source.SVN(svnurl="http://svn.example.org/Trunk/"),
         shell.ShellCommand(command="./setup")
     ]
     quick = factory.BuildFactory()
     quick.addSteps(setup_steps)
     quick.addStep(shell.ShellCommand(command="make quick"))

   The rest of this section lists all the standard BuildStep objects
available for use in a Build, and the parameters which can be used to
control each.

* Menu:

* Common Parameters::
* Using Build Properties::
* Source Checkout::
* ShellCommand::
* Simple ShellCommand Subclasses::
* Python BuildSteps::
* Transferring Files::
* Steps That Run on the Master::
* Triggering Schedulers::
* Writing New BuildSteps::


File: buildbot.info,  Node: Common Parameters,  Next: Using Build Properties,  Prev: Build Steps,  Up: Build Steps

6.1.1 Common Parameters
-----------------------

The standard `Build' runs a series of `BuildStep's in order, only
stopping when it runs out of steps or if one of them requests that
the build be halted. It collects status information from each one to
create an overall build status (of SUCCESS, WARNINGS, or FAILURE).

   All BuildSteps accept some common parameters. Some of these control
how their individual status affects the overall build. Others are used
to specify which `Locks' (see *note Interlocks::) should be acquired
before allowing the step to run.

   Arguments common to all `BuildStep' subclasses:

`name'
     the name used to describe the step on the status display. It is
     also used to give a name to any LogFiles created by this step.

`haltOnFailure'
     if True, a FAILURE of this build step will cause the build to
     halt immediately. Steps with `alwaysRun=True' are still run.
     Generally speaking, haltOnFailure implies flunkOnFailure (the
     default for most BuildSteps). In some cases, particularly series
     of tests, it makes sense to haltOnFailure if something fails
     early on but not flunkOnFailure.  This can be achieved with
     haltOnFailure=True, flunkOnFailure=False.

`flunkOnWarnings'
     when True, a WARNINGS or FAILURE of this build step will mark the
     overall build as FAILURE. The remaining steps will still be
     executed.

`flunkOnFailure'
     when True, a FAILURE of this build step will mark the overall
     build as a FAILURE. The remaining steps will still be executed.

`warnOnWarnings'
     when True, a WARNINGS or FAILURE of this build step will mark the
     overall build as having WARNINGS. The remaining steps will still
     be executed.

`warnOnFailure'
     when True, a FAILURE of this build step will mark the overall
     build as having WARNINGS. The remaining steps will still be
     executed.

`alwaysRun'
     if True, this build step will always be run, even if a previous
     buildstep with `haltOnFailure=True' has failed.

`locks'
     a list of Locks (instances of `buildbot.locks.SlaveLock' or
     `buildbot.locks.MasterLock') that should be acquired before
     starting this Step. The Locks will be released when the step is
     complete. Note that this is a list of actual Lock instances, not
     names. Also note that all Locks must have unique names.

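   As a short combined example (the commands are hypothetical), a build
might halt as soon as the compile fails, mark test warnings as a
failure, and still run a final cleanup step:

     from buildbot.steps.shell import ShellCommand

     f.addStep(ShellCommand(command=["make", "all"],
                            haltOnFailure=True))
     f.addStep(ShellCommand(command=["make", "test"],
                            flunkOnWarnings=True))
     f.addStep(ShellCommand(command=["make", "clean"],
                            alwaysRun=True))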


File: buildbot.info,  Node: Using Build Properties,  Next: Source Checkout,  Prev: Common Parameters,  Up: Build Steps

6.1.2 Using Build Properties
----------------------------

Build properties are a generalized way to provide configuration
information to build steps; see *note Build Properties::.

   Some build properties are inherited from external sources - global
properties, schedulers, or buildslaves.  Some build properties are
set when the build starts, such as the SourceStamp information. Other
properties can be set by BuildSteps as they run, for example the
various Source steps will set the `got_revision' property to the
source revision that was actually checked out (which can be useful
when the SourceStamp in use merely requested the "latest revision":
`got_revision' will tell you what was actually built).

   In custom BuildSteps, you can get and set the build properties with
the `getProperty'/`setProperty' methods. Each takes a string for the
name of the property, and returns or accepts an arbitrary(1) object.
For example:

     class MakeTarball(ShellCommand):
         def start(self):
             if self.getProperty("os") == "win":
                 self.setCommand([ ... ]) # windows-only command
             else:
                 self.setCommand([ ... ]) # equivalent for other systems
             ShellCommand.start(self)

WithProperties
==============

You can use build properties in ShellCommands by using the
`WithProperties' wrapper when setting the arguments of the
ShellCommand. This interpolates the named build properties into the
generated shell command.  Most step parameters accept
`WithProperties'.  Please file bugs for any parameters which do not.

     from buildbot.steps.shell import ShellCommand
     from buildbot.process.properties import WithProperties

     f.addStep(ShellCommand(
               command=["tar", "czf",
                        WithProperties("build-%s.tar.gz", "revision"),
                        "source"]))

   If this BuildStep were used in a tree obtained from Subversion, it
would create a tarball with a name like `build-1234.tar.gz'.

   The `WithProperties' function does `printf'-style string
interpolation, using strings obtained by calling
`build.getProperty(propname)'. Note that for every `%s' (or `%d',
etc), you must have exactly one additional argument to indicate which
build property you want to insert.

   You can also use python dictionary-style string interpolation by
using the `%(propname)s' syntax. In this form, the property name goes
in the parentheses, and WithProperties takes _no_ additional
arguments:

     f.addStep(ShellCommand(
               command=["tar", "czf",
                        WithProperties("build-%(revision)s.tar.gz"),
                        "source"]))

   Don't forget the extra "s" after the closing parenthesis! This is
the cause of many confusing errors.

   The dictionary-style interpolation supports a number of more
advanced syntaxes, too.

`propname:-replacement'
     If `propname' exists, substitute its value; otherwise,
     substitute `replacement'.  `replacement' may be empty
     (`%(propname:-)s')

`propname:+replacement'
     If `propname' exists, substitute `replacement'; otherwise,
     substitute an empty string.


   Although these are similar to shell substitutions, no other
substitutions are currently supported, and `replacement' in the above
cannot contain more substitutions.
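
   For example (a minimal sketch), to fall back to a placeholder when
the `got_revision' property happens to be missing:

     f.addStep(ShellCommand(
               command=["tar", "czf",
                        WithProperties("build-%(got_revision:-unknown)s.tar.gz"),
                        "source"]))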

   Note: like python, you can either do positional-argument
interpolation _or_ keyword-argument interpolation, not both. Thus you
cannot use a string like `WithProperties("foo-%(revision)s-%s",
"branch")'.

Common Build Properties
=======================

The following build properties are set when the build is started, and
are available to all steps. A brief usage example follows the list.

`branch'
     This comes from the build's SourceStamp, and describes which
     branch is being checked out. This will be `None' (which
     interpolates into `WithProperties' as an empty string) if the
     build is on the default branch, which is generally the trunk.
     Otherwise it will be a string like "branches/beta1.4". The exact
     syntax depends upon the VC system being used.

`revision'
     This also comes from the SourceStamp, and is the revision of the
     source code tree that was requested from the VC system. When a
     build is requested of a specific revision (as is generally the
     case when the build is triggered by Changes), this will contain
     the revision specification. This is always a string, although
     the syntax depends upon the VC system in use: for SVN it is an
     integer, for Mercurial it is a short string, for Darcs it is a
     rather large string, etc.

     If the "force build" button was pressed, the revision will be
     `None', which means to use the most recent revision available.
     This is a "trunk build". This will be interpolated as an empty
     string.

`got_revision'
     This is set when a Source step checks out the source tree, and
     provides the revision that was actually obtained from the VC
     system.  In general this should be the same as `revision',
     except for trunk builds, where `got_revision' indicates what
     revision was current when the checkout was performed. This can
     be used to rebuild the same source code later.

     Note that for some VC systems (Darcs in particular), the
     revision is a large string containing newlines, and is not
     suitable for interpolation into a filename.

`buildername'
     This is a string that indicates which Builder the build was a
     part of.  The combination of buildername and buildnumber
     uniquely identify a build.

`buildnumber'
     Each build gets a number, scoped to the Builder (so the first
     build performed on any given Builder will have a build number of
     0). This integer property contains the build's number.

`slavename'
     This is a string which identifies which buildslave the build is
     running on.

`scheduler'
     If the build was started from a scheduler, then this property
     will contain the name of that scheduler.

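   As a brief usage example (a sketch; the filename is arbitrary), these
properties can be interpolated with `WithProperties' like any other:

     from buildbot.steps.shell import ShellCommand
     from buildbot.process.properties import WithProperties

     f.addStep(ShellCommand(
               command=["tar", "czf",
                        WithProperties("%(buildername)s-%(buildnumber)s.tar.gz"),
                        "source"]))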

   ---------- Footnotes ----------

   (1) Build properties are serialized along with the build results,
so they must be serializable. For this reason, the value of any build
property should be simple inert data: strings, numbers, lists,
tuples, and dictionaries. They should not contain class instances.


File: buildbot.info,  Node: Source Checkout,  Next: ShellCommand,  Prev: Using Build Properties,  Up: Build Steps

6.1.3 Source Checkout
---------------------

The first step of any build is typically to acquire the source code
from which the build will be performed. There are several classes to
handle this, one for each of the different source control systems that
Buildbot knows about. For a description of how Buildbot treats source
control in general, see *note Version Control Systems::.

   All source checkout steps accept some common parameters to control
how they get the sources and where they should be placed. The
remaining per-VC-system parameters are mostly to specify where
exactly the sources are coming from.

`mode'
     a string describing the kind of VC operation that is desired.
     Defaults to `update'.

    `update'
          specifies that the CVS checkout/update should be performed
          directly into the workdir. Each build is performed in the
          same directory, allowing for incremental builds. This
          minimizes disk space, bandwidth, and CPU time. However, it
          may encounter problems if the build process does not handle
          dependencies properly (sometimes you must do a "clean
          build" to make sure everything gets compiled), or if source
          files are deleted but generated files can influence test
          behavior (e.g. python's .pyc files), or when source
          directories are deleted but generated files prevent CVS
          from removing them. Builds ought to be correct regardless
          of whether they are done "from scratch" or incrementally,
          but it is useful to test both kinds: this mode exercises the
          incremental-build style.

    `copy'
          specifies that the CVS workspace should be maintained in a
          separate directory (called the 'copydir'), using checkout
          or update as necessary. For each build, a new workdir is
          created with a copy of the source tree (rm -rf workdir; cp
          -r copydir workdir). This doubles the disk space required,
          but keeps the bandwidth low (update instead of a full
          checkout). A full 'clean' build is performed each time. This
          avoids any generated-file build problems, but is still
          occasionally vulnerable to CVS problems such as a
          repository being manually rearranged, causing CVS errors on
          update which are not an issue with a full checkout.

    `clobber'
          specifies that the working directory should be deleted each
          time, necessitating a full checkout for each build. This
          ensures a clean build from a complete checkout, avoiding any
          of the problems described above. This mode exercises the
          "from-scratch" build style.

    `export'
          this is like `clobber', except that the 'cvs export'
          command is used to create the working directory. This
          command removes all CVS metadata files (the CVS/
          directories) from the tree, which is sometimes useful for
          creating source tarballs (to avoid including the metadata
          in the tar file).

`workdir'
     like all Steps, this indicates the directory where the build
     will take place. Source Steps are special in that they perform
     some operations outside of the workdir (like creating the
     workdir itself).

`alwaysUseLatest'
     if True, bypass the usual "update to the last Change" behavior,
     and always update to the latest changes instead.

`retry'
     If set, this specifies a tuple of `(delay, repeats)' which means
     that when a full VC checkout fails, it should be retried up to
     REPEATS times, waiting DELAY seconds between attempts. If you
     don't provide this, it defaults to `None', which means VC
     operations should not be retried. This is provided to make life
     easier for buildslaves which are stuck behind poor network
     connections.


   My habit as a developer is to do a `cvs update' and `make' each
morning. Problems can occur, either because of bad code being checked
in, or by incomplete dependencies causing a partial rebuild to fail
where a complete from-scratch build might succeed. A quick Builder
which emulates this incremental-build behavior would use the
`mode='update'' setting.

   On the other hand, other kinds of dependency problems can cause a
clean build to fail where a partial build might succeed. This
frequently results from a link step that depends upon an object file
that was removed from a later version of the tree: in the partial
tree, the object file is still around (even though the Makefiles no
longer know how to create it).

   "official" builds (traceable builds performed from a known set of
source revisions) are always done as clean builds, to make sure they are
not influenced by any uncontrolled factors (like leftover files from a
previous build). A "full" Builder which behaves this way would want
to use the `mode='clobber'' setting.

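   As a sketch (the repository URL and commands are hypothetical), the
two Builders described above might be given factories like these:

     from buildbot.process import factory
     from buildbot.steps import source, shell

     quick = factory.BuildFactory()
     quick.addStep(source.SVN(svnurl="http://svn.example.org/Trunk/",
                              mode="update"))
     quick.addStep(shell.Compile(command=["make", "all"]))

     full = factory.BuildFactory()
     full.addStep(source.SVN(svnurl="http://svn.example.org/Trunk/",
                             mode="clobber"))
     full.addStep(shell.Compile(command=["make", "all"]))
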
   Each VC system has a corresponding source checkout class: their
arguments are described on the following pages.

* Menu:

* CVS::
* SVN::
* Darcs::
* Mercurial::
* Arch::
* Bazaar::
* Bzr::
* P4::
* Git::


File: buildbot.info,  Node: CVS,  Next: SVN,  Prev: Source Checkout,  Up: Source Checkout

6.1.3.1 CVS
...........

The `CVS' build step performs a CVS (http://www.nongnu.org/cvs/)
checkout or update. It takes the following arguments (a short example
follows the list):

`cvsroot'
     (required): specify the CVSROOT value, which points to a CVS
     repository, probably on a remote machine. For example, the
     cvsroot value you would use to get a copy of the Buildbot source
     code is
     `:pserver:anonymous@cvs.sourceforge.net:/cvsroot/buildbot'

`cvsmodule'
     (required): specify the cvs `module', which is generally a
     subdirectory of the CVSROOT. The cvsmodule for the Buildbot
     source code is `buildbot'.

`branch'
     a string which will be used in a `-r' argument. This is most
     useful for specifying a branch to work on. Defaults to `HEAD'.

`global_options'
     a list of flags to be put before the verb in the CVS command.

`checkoutDelay'
     if set, the number of seconds to put between the timestamp of
     the last known Change and the value used for the `-D' option.
     Defaults to half of the parent Build's treeStableTimer.

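   A minimal example (the cvsroot and module shown are hypothetical):

     from buildbot.steps import source

     f.addStep(source.CVS(
                   cvsroot=":pserver:anonymous@cvs.example.org:/cvsroot/myproj",
                   cvsmodule="myproj",
                   branch="HEAD"))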


File: buildbot.info,  Node: SVN,  Next: Darcs,  Prev: CVS,  Up: Source Checkout

6.1.3.2 SVN
...........

The `SVN' build step performs a Subversion
(http://subversion.tigris.org) checkout or update.  There are two
basic ways of setting up the checkout step, depending upon whether
you are using multiple branches or not.

   If all of your builds use the same branch, then you should create
the `SVN' step with the `svnurl' argument:

`svnurl'
     (required): this specifies the `URL' argument that will be given
     to the `svn checkout' command. It dictates both where the
     repository is located and which sub-tree should be extracted. In
     this respect, it is like a combination of the CVS `cvsroot' and
     `cvsmodule' arguments. For example, if you are using a remote
     Subversion repository which is accessible through HTTP at a URL
     of `http://svn.example.com/repos', and you wanted to check out
     the `trunk/calc' sub-tree, you would use
     `svnurl="http://svn.example.com/repos/trunk/calc"' as an argument
     to your `SVN' step.

   If, on the other hand, you are building from multiple branches,
then you should create the `SVN' step with the `baseURL' and
`defaultBranch' arguments instead:

`baseURL'
     (required): this specifies the base repository URL, to which a
     branch name will be appended. It should probably end in a slash.

`defaultBranch'
     this specifies the name of the branch to use when a Build does
     not provide one of its own. This will be appended to `baseURL' to
     create the string that will be passed to the `svn checkout'
     command.

`username'
     if specified, this will be passed to the `svn' binary with a
     `--username' option.

`password'
     if specified, this will be passed to the `svn' binary with a
     `--password' option.  The password itself will be suitably
     obfuscated in the logs.


   If you are using branches, you must also make sure your
`ChangeSource' will report the correct branch names.

branch example
==============

Let's suppose that the "MyProject" repository uses branches for the
trunk, for various users' individual development efforts, and for
several new features that will require some amount of work (involving
multiple developers) before they are ready to merge onto the trunk.
Such a repository might be organized as follows:

     svn://svn.example.org/MyProject/trunk
     svn://svn.example.org/MyProject/branches/User1/foo
     svn://svn.example.org/MyProject/branches/User1/bar
     svn://svn.example.org/MyProject/branches/User2/baz
     svn://svn.example.org/MyProject/features/newthing
     svn://svn.example.org/MyProject/features/otherthing

   Further assume that we want the Buildbot to run tests against the
trunk and against all the feature branches (i.e., do a
checkout/compile/build of branch X when a file has been changed on
branch X, when X is in the set [trunk, features/newthing,
features/otherthing]). We do not want the Buildbot to automatically
build any of the user branches, but it should be willing to build a
user branch when explicitly requested (most likely by the user who
owns that branch).

   There are three things that need to be set up to accommodate this
system. The first is a ChangeSource that is capable of identifying the
branch which owns any given file. This depends upon a user-supplied
function, in an external program that runs in the SVN commit hook and
connects to the buildmaster's `PBChangeSource' over a TCP connection.
(You can use the "`buildbot sendchange'" utility for this purpose,
but you will still need an external program to decide what value
should be passed to the `--branch=' argument).  For example, a change
to a file with the SVN url of
"svn://svn.example.org/MyProject/features/newthing/src/foo.c" should
be broken down into a Change instance with
`branch='features/newthing'' and `file='src/foo.c''.

   The second piece is an `AnyBranchScheduler' which will pay
attention to the desired branches. It will not pay attention to the
user branches, so it will not automatically start builds in response
to changes there. The AnyBranchScheduler class requires you to
explicitly list all the branches you want it to use, but it would not
be difficult to write a subclass which used
`branch.startswith('features/')' to remove the need for this explicit
list. Or, if you want to build user branches too, you can use
AnyBranchScheduler with `branches=None' to indicate that you want it
to pay attention to all branches.

   The third piece is an `SVN' checkout step that is configured to
handle the branches correctly, with a `baseURL' value that matches
the way the ChangeSource splits each file's URL into base, branch,
and file.

     from buildbot.changes.pb import PBChangeSource
     from buildbot.scheduler import AnyBranchScheduler
     from buildbot.process import factory
     from buildbot.steps import source, shell

     c['change_source'] = PBChangeSource()
     s1 = AnyBranchScheduler('main',
                             ['trunk', 'features/newthing', 'features/otherthing'],
                             10*60, ['test-i386', 'test-ppc'])
     c['schedulers'] = [s1]

     f = factory.BuildFactory()
     f.addStep(source.SVN(mode='update',
                          baseURL='svn://svn.example.org/MyProject/',
                          defaultBranch='trunk'))
     f.addStep(shell.Compile(command="make all"))
     f.addStep(shell.Test(command="make test"))

     c['builders'] = [
       {'name':'test-i386', 'slavename':'bot-i386', 'builddir':'test-i386',
                            'factory':f },
       {'name':'test-ppc', 'slavename':'bot-ppc', 'builddir':'test-ppc',
                           'factory':f },
      ]

   In this example, when a change arrives with a `branch' attribute
of "trunk", the resulting build will have an SVN step that
concatenates "svn://svn.example.org/MyProject/" (the baseURL) with
"trunk" (the branch name) to get the correct svn command. If the
"newthing" branch has a change to "src/foo.c", then the SVN step will
concatenate "svn://svn.example.org/MyProject/" with
"features/newthing" to get the svnurl for checkout.


File: buildbot.info,  Node: Darcs,  Next: Mercurial,  Prev: SVN,  Up: Source Checkout

6.1.3.3 Darcs
.............

The `Darcs' build step performs a Darcs (http://darcs.net/) checkout
or update.

   Like *Note SVN::, this step can either be configured to always
check out a specific tree, or set up to pull from a particular branch
that gets specified separately for each build. Also like SVN, the
repository URL given to Darcs is created by concatenating a `baseURL'
with the branch name, and if no particular branch is requested, it
uses a `defaultBranch'. The only difference in usage is that each
potential Darcs repository URL must point to a fully-fledged
repository, whereas SVN URLs usually point to sub-trees of the main
Subversion repository. In other words, doing an SVN checkout of
`baseURL' is legal, but silly, since you'd probably wind up with a
copy of every single branch in the whole repository.  Doing a Darcs
checkout of `baseURL' is just plain wrong, since the parent directory
of a collection of Darcs repositories is not itself a valid
repository.

   The Darcs step takes the following arguments:

`repourl'
     (required unless `baseURL' is provided): the URL at which the
     Darcs source repository is available.

`baseURL'
     (required unless `repourl' is provided): the base repository URL,
     to which a branch name will be appended. It should probably end
     in a slash.

`defaultBranch'
     (allowed if and only if `baseURL' is provided): this specifies
     the name of the branch to use when a Build does not provide one
     of its own. This will be appended to `baseURL' to create the
     string that will be passed to the `darcs get' command.


File: buildbot.info,  Node: Mercurial,  Next: Arch,  Prev: Darcs,  Up: Source Checkout

6.1.3.4 Mercurial
.................

The `Mercurial' build step performs a Mercurial
(http://selenic.com/mercurial) (aka "hg") checkout or update.

   Branches are handled just like *Note Darcs::.

   The Mercurial step takes the following arguments:

`repourl'
     (required unless `baseURL' is provided): the URL at which the
     Mercurial source repository is available.

`baseURL'
     (required unless `repourl' is provided): the base repository URL,
     to which a branch name will be appended. It should probably end
     in a slash.

`defaultBranch'
     (allowed if and only if `baseURL' is provided): this specifies
     the name of the branch to use when a Build does not provide one
     of its own. This will be appended to `baseURL' to create the
     string that will be passed to the `hg clone' command.


File: buildbot.info,  Node: Arch,  Next: Bazaar,  Prev: Mercurial,  Up: Source Checkout

6.1.3.5 Arch
............

The `Arch' build step performs an Arch (http://gnuarch.org/) checkout
or update using the `tla' client. It takes the following arguments:

`url'
     (required): this specifies the URL at which the Arch source
     archive is available.

`version'
     (required): this specifies which "development line" (like a
     branch) should be used. This provides the default branch name,
     but individual builds may specify a different one.

`archive'
     (optional): Each repository knows its own archive name. If this
     parameter is provided, it must match the repository's archive
     name.  The parameter is accepted for compatibility with the
     `Bazaar' step, below.



File: buildbot.info,  Node: Bazaar,  Next: Bzr,  Prev: Arch,  Up: Source Checkout

6.1.3.6 Bazaar
..............

`Bazaar' is an alternate implementation of the Arch VC system, which
uses a client named `baz'. The checkout semantics are just different
enough from `tla' that there is a separate BuildStep for it.

   It takes exactly the same arguments as `Arch', except that the
`archive=' parameter is required. (baz does not emit the archive name
when you do `baz register-archive', so we must provide it ourselves).


File: buildbot.info,  Node: Bzr,  Next: P4,  Prev: Bazaar,  Up: Source Checkout

6.1.3.7 Bzr
...........

`bzr' is a descendant of Arch/Baz, and is frequently referred to as
simply "Bazaar". The repository-vs-workspace model is similar to
Darcs, but it uses a strictly linear sequence of revisions (one
history per branch) like Arch. Branches are put in subdirectories.
This makes it look very much like Mercurial, so it takes the same
arguments:

`repourl'
     (required unless `baseURL' is provided): the URL at which the
     Bzr source repository is available.

`baseURL'
     (required unless `repourl' is provided): the base repository URL,
     to which a branch name will be appended. It should probably end
     in a slash.

`defaultBranch'
     (allowed if and only if `baseURL' is provided): this specifies
     the name of the branch to use when a Build does not provide one
     of its own. This will be appended to `baseURL' to create the
     string that will be passed to the `bzr checkout' command.


File: buildbot.info,  Node: P4,  Next: Git,  Prev: Bzr,  Up: Source Checkout

6.1.3.8 P4
..........

The `P4' build step creates a Perforce (http://www.perforce.com/)
client specification and performs an update.

`p4base'
     A view into the Perforce depot without branch name or trailing
     "...".  Typically "//depot/proj/".

`defaultBranch'
     A branch name to append on build requests if none is specified.
     Typically "trunk".

`p4port'
     (optional): the host:port string describing how to get to the P4
     Depot (repository), used as the -p argument for all p4 commands.

`p4user'
     (optional): the Perforce user, used as the -u argument to all p4
     commands.

`p4passwd'
     (optional): the Perforce password, used as the -P argument to
     all p4 commands.

`p4extra_views'
     (optional): a list of (depotpath, clientpath) tuples containing
     extra views to be mapped into the client specification. Both
     will have "/..." appended automatically. The client name and
     source directory will be prepended to the client path.

`p4client'
     (optional): The name of the client to use. In mode='copy' and
     mode='update', it's particularly important that a unique name is
     used for each checkout directory to avoid incorrect
     synchronization. For this reason, Python percent substitution
     will be performed on this value to replace %(slave)s with the
     slave name and %(builder)s with the builder name. The default is
     "buildbot_%(slave)s_%(build)s".


File: buildbot.info,  Node: Git,  Prev: P4,  Up: Source Checkout

6.1.3.9 Git
...........

The `Git' build step clones or updates a Git (http://git.or.cz/)
repository and checks out the specified branch or revision. Note that
the buildbot supports Git version 1.2.0 and later: earlier versions
(such as the one shipped in Ubuntu 'Dapper') do not support the `git
init' command that the buildbot uses.

   The Git step takes the following arguments (a short example follows
the list):

`repourl'
     (required): the URL of the upstream Git repository.

`branch'
     (optional): this specifies the name of the branch to use when a
     Build does not provide one of its own. If this parameter is
     not specified, and the Build does not provide a branch, the
     "master" branch will be used.

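   A minimal example (the repository URL and branch are hypothetical):

     from buildbot.steps import source

     f.addStep(source.Git(repourl="git://git.example.org/myproj.git",
                          branch="testing"))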

File: buildbot.info,  Node: ShellCommand,  Next: Simple ShellCommand Subclasses,  Prev: Source Checkout,  Up: Build Steps

6.1.4 ShellCommand
------------------

This is a useful base class for just about everything you might want
to do during a build (except for the initial source checkout). It runs
a single command in a child shell on the buildslave. All stdout/stderr
is recorded into a LogFile. The step finishes with a status of FAILURE
if the command's exit code is non-zero, otherwise it has a status of
SUCCESS.

   The preferred way to specify the command is with a list of argv
strings, since this allows for spaces in filenames and avoids doing
any fragile shell-escaping. You can also specify the command with a
single string, in which case the string is given to '/bin/sh -c
COMMAND' for parsing.

   On Windows, commands are run via `cmd.exe /c' which works well.
However, if you're running a batch file, the error level does not get
propagated correctly unless you add 'call' before your batch file's
name: `cmd=['call', 'myfile.bat', ...]'.

   All ShellCommands are run by default in the "workdir", which
defaults to the "`build'" subdirectory of the slave builder's base
directory. The absolute path of the workdir will thus be the slave's
basedir (set as an option to `buildbot create-slave', *note Creating
a buildslave::) plus the builder's basedir (set via the `builddir'
key of the builder specification in master.cfg) plus the workdir itself (a
class-level attribute of the BuildFactory, defaults to "`build'").

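   For example, with hypothetical values: a slave created with a basedir
of `/home/bb/slave', a builder whose `builddir' is `test-i386', and the
default workdir would run its commands in
`/home/bb/slave/test-i386/build'.
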
   `ShellCommand' arguments:

`command'
     a list of strings (preferred) or single string (discouraged)
     which specifies the command to be run. A list of strings is
     preferred because it can be used directly as an argv array.
     Using a single string (with embedded spaces) requires the
     buildslave to pass the string to /bin/sh for interpretation,
     which raises all sorts of difficult questions about how to
     escape or interpret shell metacharacters.

`env'
     a dictionary of environment strings which will be added to the
     child command's environment. For example, to run tests with a
     different i18n language setting, you might use

          f.addStep(ShellCommand(command=["make", "test"],
                                 env={'LANG': 'fr_FR'}))

     These variable settings will override any existing ones in the
     buildslave's environment or the environment specified in the
     Builder. The exception is PYTHONPATH, which is merged with
     (actually prepended to) any existing $PYTHONPATH setting. The
     value is treated as a list of directories to prepend, and a
     single string is treated like a one-item list. For example, to
     prepend both `/usr/local/lib/python2.3' and
     `/home/buildbot/lib/python' to any existing $PYTHONPATH setting,
     you would do something like the following:

          f.addStep(ShellCommand(
                        command=["make", "test"],
                        env={'PYTHONPATH': ["/usr/local/lib/python2.3",
                                             "/home/buildbot/lib/python"] }))

`want_stdout'
     if False, stdout from the child process is discarded rather than
     being sent to the buildmaster for inclusion in the step's
     LogFile.

`want_stderr'
     like `want_stdout' but for stderr. Note that commands run through
     a PTY do not have separate stdout/stderr streams: both are
     merged into stdout.

`usePTY'
     Should this command be run in a `pty'?  The default is to
     observe the configuration of the client (*note Buildslave
     Options::), but specifying `True' or `False' here will override
     the default.

     The advantage of using a PTY is that "grandchild" processes are
     more likely to be cleaned up if the build is interrupted or
     times out (since it enables the use of a "process group" in
     which all child processes will be placed). The disadvantages:
     some forms of Unix have problems with PTYs, some of your unit
     tests may behave differently when run under a PTY (generally
     those which check to see if they are being run interactively),
     and PTYs will merge the stdout and stderr streams into a single
     output stream (which means the red-vs-black coloring in the
     logfiles will be lost).

`logfiles'
     Sometimes commands will log interesting data to a local file,
     rather than emitting everything to stdout or stderr. For
     example, Twisted's "trial" command (which runs unit tests) only
     presents summary information to stdout, and puts the rest into a
     file named `_trial_temp/test.log'. It is often useful to watch
     these files as the command runs, rather than using `/bin/cat' to
     dump their contents afterwards.

     The `logfiles=' argument allows you to collect data from these
     secondary logfiles in near-real-time, as the step is running. It
     accepts a dictionary which maps from a local Log name (which is
     how the log data is presented in the build results) to a remote
     filename (interpreted relative to the build's working
     directory). Each named file will be polled on a regular basis
     (every couple of seconds) as the build runs, and any new text
     will be sent over to the buildmaster.

          f.addStep(ShellCommand(
                        command=["make", "test"],
                        logfiles={"triallog": "_trial_temp/test.log"}))

`timeout'
     if the command fails to produce any output for this many
     seconds, it is assumed to be locked up and will be killed.

`description'
     This will be used to describe the command (on the Waterfall
     display) while the command is still running. It should be a
     single imperfect-tense verb, like "compiling" or "testing". The
     preferred form is a list of short strings, which allows the HTML
     Waterfall display to create narrower columns by emitting a <br>
     tag between each word. You may also provide a single string.

`descriptionDone'
     This will be used to describe the command once it has finished. A
     simple noun like "compile" or "tests" should be used. Like
     `description', this may either be a list of short strings or a
     single string.

     If neither `description' nor `descriptionDone' are set, the
     actual command arguments will be used to construct the
     description.  This may be a bit too wide to fit comfortably on
     the Waterfall display.

          f.addStep(ShellCommand(command=["make", "test"],
                                 description=["testing"],
                                 descriptionDone=["tests"]))

`logEnviron'
     If this option is true (the default), then the step's logfile
     will describe the environment variables on the slave.  In
     situations where the environment is not relevant and is long, it
     may be easier to set `logEnviron=False'.



File: buildbot.info,  Node: Simple ShellCommand Subclasses,  Next: Python BuildSteps,  Prev: ShellCommand,  Up: Build Steps

6.1.5 Simple ShellCommand Subclasses
------------------------------------

Several subclasses of ShellCommand are provided as starting points for
common build steps. These are all very simple: they just override a
few parameters so you don't have to specify them yourself, making the
master.cfg file less verbose.

* Menu:

* Configure::
* Compile::
* Test::
* TreeSize::
* PerlModuleTest::
* SetProperty::


File: buildbot.info,  Node: Configure,  Next: Compile,  Prev: Simple ShellCommand Subclasses,  Up: Simple ShellCommand Subclasses

6.1.5.1 Configure
.................

This is intended to handle the `./configure' step from autoconf-style
projects, or the `perl Makefile.PL' step from perl MakeMaker.pm-style
modules. The default command is `./configure' but you can change this
by providing a `command=' parameter.


File: buildbot.info,  Node: Compile,  Next: Test,  Prev: Configure,  Up: Simple ShellCommand Subclasses

6.1.5.2 Compile
...............

This is meant to handle compiling or building a project written in C.
The default command is `make all'. When the compile is finished, the
log file is scanned for GCC warning messages, a summary log is
created with any problems that were seen, and the step is marked as
WARNINGS if any were discovered. The number of warnings is stored in a
Build Property named "warnings-count", which is accumulated over all
Compile steps (so if two warnings are found in one step, and three are
found in another step, the overall build will have a "warnings-count"
property of 5).

   The default regular expression used to detect a warning is
`'.*warning[: ].*'', which is fairly liberal and may cause false
positives. To use a different regexp, provide a
`warningPattern=' argument, or use a subclass which sets the
`warningPattern' attribute:

     f.addStep(Compile(command=["make", "test"],
                       warningPattern="^Warning: "))

   The `warningPattern=' can also be a pre-compiled python regexp
object: this makes it possible to add flags like `re.I' (to use
case-insensitive matching).

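   For example (a minimal sketch), to match warnings case-insensitively
with a pre-compiled pattern:

     import re

     f.addStep(Compile(command=["make", "all"],
                       warningPattern=re.compile(".*warning[: ].*", re.I)))
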
   (TODO: this step needs to be extended to look for GCC error
messages as well, and collect them into a separate logfile, along
with the source code filenames involved).


File: buildbot.info,  Node: Test,  Next: TreeSize,  Prev: Compile,  Up: Simple ShellCommand Subclasses

6.1.5.3 Test
............

This is meant to handle unit tests. The default command is `make
test', and the `warnOnFailure' flag is set.


File: buildbot.info,  Node: TreeSize,  Next: PerlModuleTest,  Prev: Test,  Up: Simple ShellCommand Subclasses

6.1.5.4 TreeSize
................

This is a simple command that uses the 'du' tool to measure the size
of the code tree. It puts the size (as a count of 1024-byte blocks,
aka 'KiB' or 'kibibytes') on the step's status text, and sets a build
property named 'tree-size-KiB' with the same value.


File: buildbot.info,  Node: PerlModuleTest,  Next: SetProperty,  Prev: TreeSize,  Up: Simple ShellCommand Subclasses

6.1.5.5 PerlModuleTest
......................

This is a simple command that knows how to run tests of perl modules.
It parses the output to determine the number of tests passed and
failed and total number executed, saving the results for later query.


File: buildbot.info,  Node: SetProperty,  Prev: PerlModuleTest,  Up: Simple ShellCommand Subclasses

6.1.5.6 SetProperty
...................

This buildstep is similar to ShellCommand, except that it captures the
output of the command into a property.  It is usually used like this:

     f.addStep(SetProperty(command="uname -a", property="uname"))

   This runs `uname -a' and captures its stdout, stripped of leading
and trailing whitespace, in the property "uname".  To avoid stripping,
add `strip=False'.  The `property' argument can be specified as a
`WithProperties' object.

   The more advanced usage allows you to specify a function to extract
properties from the command output.  Here you can use regular
expressions, string interpolation, or whatever you would like.  The
function is called with three arguments: the exit status of the
command, its standard output as a string, and its standard error as a
string.  It should return a dictionary containing all new properties.

     def glob2list(rc, stdout, stderr):
         jpgs = [ l.strip() for l in stdout.split('\n') ]
         return { 'jpgs' : jpgs }
     f.addStep(SetProperty(command="ls -1 *.jpg", extract_fn=glob2list))

   Note that any ordering relationship of the contents of stdout and
stderr is lost.  For example, given

     f.addStep(SetProperty(
         command="echo output1; echo error >&2; echo output2",
         extract_fn=my_extract))

   Then `my_extract' will see `stdout="output1\noutput2\n"' and
`stderr="error\n"'.


File: buildbot.info,  Node: Python BuildSteps,  Next: Transferring Files,  Prev: Simple ShellCommand Subclasses,  Up: Build Steps

6.1.6 Python BuildSteps
-----------------------

Here are some BuildSteps that are specifically useful for projects
implemented in Python.

* Menu:

* BuildEPYDoc::
* PyFlakes::
* PyLint::


File: buildbot.info,  Node: BuildEPYDoc,  Next: PyFlakes,  Up: Python BuildSteps

6.1.6.1 BuildEPYDoc
...................

epydoc (http://epydoc.sourceforge.net/) is a tool for generating API
documentation for Python modules from their docstrings. It reads all
the .py files from your source tree, processes the docstrings
therein, and creates a large tree of .html files (or a single .pdf
file).

   The `buildbot.steps.python.BuildEPYDoc' step will run `epydoc' to
produce this API documentation, and will count the errors and
warnings from its output.

   You must supply the command line to be used. The default is `make
epydocs', which assumes that your project has a Makefile with an
"epydocs" target. You might wish to use something like `epydoc -o
apiref source/PKGNAME' instead. You might also want to add `--pdf' to
generate a PDF file instead of a large tree of HTML files.

   The API docs are generated in-place in the build tree (under the
workdir, in the subdirectory controlled by the "-o" argument). To
make them useful, you will probably have to copy them to somewhere
they can be read. A command like `rsync -ad apiref/
dev.example.com:~public_html/current-apiref/' might be useful. You
might instead want to bundle them into a tarball and publish it in the
same place where the generated install tarball is placed.

     from buildbot.steps.python import BuildEPYDoc

     ...
     f.addStep(BuildEPYDoc(command=["epydoc", "-o", "apiref", "source/mypkg"]))


File: buildbot.info,  Node: PyFlakes,  Next: PyLint,  Prev: BuildEPYDoc,  Up: Python BuildSteps

6.1.6.2 PyFlakes
................

PyFlakes (http://divmod.org/trac/wiki/DivmodPyflakes) is a tool to
perform basic static analysis of Python code to look for simple
errors, like missing imports and references of undefined names. It is
like a fast and simple form of the C "lint" program. Other tools
(like pychecker) provide more detailed results but take longer to run.

   The `buildbot.steps.python.PyFlakes' step will run pyflakes and
count the various kinds of errors and warnings it detects.

   You must supply the command line to be used. The default is `make
pyflakes', which assumes you have a top-level Makefile with a
"pyflakes" target. You might want to use something like `pyflakes .'
or `pyflakes src'.

     from buildbot.steps.python import PyFlakes

     ...
     f.addStep(PyFlakes(command=["pyflakes", "src"]))


File: buildbot.info,  Node: PyLint,  Prev: PyFlakes,  Up: Python BuildSteps

6.1.6.3 PyLint
..............

Similarly, the `buildbot.steps.python.PyLint' step will run pylint and
analyze the results.

   You must supply the command line to be used. There is no default.

     from buildbot.steps.python import PyLint

     ...
     f.addStep(PyLint(command=["pylint", "src"]))


File: buildbot.info,  Node: Transferring Files,  Next: Steps That Run on the Master,  Prev: Python BuildSteps,  Up: Build Steps

6.1.7 Transferring Files
------------------------

Most of the work involved in a build will take place on the
buildslave. But occasionally it is useful to do some work on the
buildmaster side. The most basic way to involve the buildmaster is
simply to move a file from the slave to the master, or vice versa.
There are a pair of BuildSteps named `FileUpload' and `FileDownload'
to provide this functionality. `FileUpload' moves a file _up to_ the
master, while `FileDownload' moves a file _down from_ the master.

   As an example, let's assume that there is a step which produces an
HTML file within the source tree that contains some sort of generated
project documentation. We want to move this file to the buildmaster,
into a `~/public_html' directory, so it can be visible to developers.
This file will wind up in the slave-side working directory under the
name `docs/reference.html'. We want to put it into the master-side
`~/public_html/ref.html'.

     from buildbot.steps.shell import ShellCommand
     from buildbot.steps.transfer import FileUpload

     f.addStep(ShellCommand(command=["make", "docs"]))
     f.addStep(FileUpload(slavesrc="docs/reference.html",
                          masterdest="~/public_html/ref.html"))

   The `masterdest=' argument will be passed to os.path.expanduser,
so things like "~" will be expanded properly. Non-absolute paths will
be interpreted relative to the buildmaster's base directory.
Likewise, the `slavesrc=' argument will be expanded and interpreted
relative to the builder's working directory.

   To move a file from the master to the slave, use the
`FileDownload' command. For example, let's assume that some step
requires a configuration file that, for whatever reason, could not be
recorded in the source code repository or generated on the buildslave
side:

     from buildbot.steps.shell import ShellCommand
     from buildbot.steps.transfer import FileDownload

     f.addStep(FileDownload(mastersrc="~/todays_build_config.txt",
                            slavedest="build_config.txt"))
     f.addStep(ShellCommand(command=["make", "config"]))

   Like `FileUpload', the `mastersrc=' argument is interpreted
relative to the buildmaster's base directory, and the `slavedest='
argument is relative to the builder's working directory. If the
buildslave is running in `~buildslave', and the builder's "builddir"
is something like `tests-i386', then the workdir is going to be
`~buildslave/tests-i386/build', and a `slavedest=' of `foo/bar.html'
will get put in `~buildslave/tests-i386/build/foo/bar.html'. Both of
these commands will create any missing intervening directories.

Other Parameters
----------------

The `maxsize=' argument lets you set a maximum size for the file to
be transferred. This may help to avoid surprises: transferring a
100MB coredump when you were expecting to move a 10kB status file
might take an awfully long time. The `blocksize=' argument controls
how the file is sent over the network: larger blocksizes are slightly
more efficient but also consume more memory on each end, and there is
a hard-coded limit of about 640kB.

   The `mode=' argument allows you to control the access permissions
of the target file, traditionally expressed as an octal integer. The
most common value is probably 0755, which sets the "x" executable bit
on the file (useful for shell scripts and the like). The default
value for `mode=' is None, which means the permission bits will
default to whatever the umask of the writing process is. The default
umask tends to be fairly restrictive, but at least on the buildslave
you can make it less restrictive with a -umask command-line option at
creation time (*note Buildslave Options::).
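
   For example (a sketch; the filenames are hypothetical), to cap the
transfer size and make the uploaded file executable:

     from buildbot.steps.transfer import FileUpload

     f.addStep(FileUpload(slavesrc="dist/installer.sh",
                          masterdest="~/public_html/installer.sh",
                          maxsize=10*1024*1024,
                          mode=0755))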

Transferring Directories
------------------------

To transfer complete directories from the buildslave to the master,
there is a BuildStep named `DirectoryUpload'. It works like
`FileUpload', just for directories. However it does not support the
`maxsize', `blocksize' and `mode' arguments. As an example, let's
assume the build generates project documentation consisting of many
files (like the output of doxygen or epydoc). We want to move the
entire documentation to the buildmaster, into a `~/public_html/docs'
directory. On the slave-side the directory can be found under `docs':

     from buildbot.steps.shell import ShellCommand
     from buildbot.steps.transfer import DirectoryUpload

     f.addStep(ShellCommand(command=["make", "docs"]))
     f.addStep(DirectoryUpload(slavesrc="docs",
     				masterdest="~/public_html/docs"))

   The DirectoryUpload step will create all necessary directories, and
it transfers empty directories, too.


File: buildbot.info,  Node: Steps That Run on the Master,  Next: Triggering Schedulers,  Prev: Transferring Files,  Up: Build Steps

6.1.8 Steps That Run on the Master
----------------------------------

Occasionally, it is useful to execute some task on the master, for
example to create a directory, deploy a build result, or trigger some
other centralized processing.  This is possible, in a limited
fashion, with the `MasterShellCommand' step.

   This step operates similarly to a regular `ShellCommand', but
executes on the master, instead of the slave.  To be clear, the
enclosing `Build' object must still have a slave object, just as for
any other step - only, in this step, the slave does not do anything.

   In this example, the step renames a tarball based on the day of
the week.

     from buildbot.steps.transfer import FileUpload
     from buildbot.steps.master import MasterShellCommand

     f.addStep(FileUpload(slavesrc="widgetsoft.tar.gz",
                          masterdest="/var/buildoutputs/widgetsoft-new.tar.gz"))
     f.addStep(MasterShellCommand(command="""
         cd /var/buildoutputs;
         mv widgetsoft-new.tar.gz widgetsoft-`date +%a`.tar.gz"""))


File: buildbot.info,  Node: Triggering Schedulers,  Next: Writing New BuildSteps,  Prev: Steps That Run on the Master,  Up: Build Steps

6.1.9 Triggering Schedulers
---------------------------

The counterpart to the Triggerable described in section *note
Triggerable Scheduler:: is the Trigger BuildStep.

     from buildbot.steps.trigger import Trigger
     f.addStep(Trigger(schedulerNames=['build-prep'],
                       waitForFinish=True,
                       updateSourceStamp=True))

   The `schedulerNames=' argument lists the Triggerables that should
be triggered when this step is executed.  Note that it is possible,
but not advisable, to create a cycle where a build continually
triggers itself, because the schedulers are specified by name.

   If `waitForFinish' is True, then the step will not finish until
all of the builds from the triggered schedulers have finished. If this
argument is False (the default) or not given, then the buildstep
succeeds immediately after triggering the schedulers.

   If `updateSourceStamp' is True (the default), then the step updates
the SourceStamp given to the Triggerables to include `got_revision'
(the revision actually used in this build) as `revision' (the
revision to use in the triggered builds). This is useful to ensure
that all of the builds use exactly the same SourceStamp, even if
other Changes have occurred while the build was running.


File: buildbot.info,  Node: Writing New BuildSteps,  Prev: Triggering Schedulers,  Up: Build Steps

6.1.10 Writing New BuildSteps
-----------------------------

While it is a good idea to keep your build process self-contained in
the source code tree, sometimes it is convenient to put more
intelligence into your Buildbot configuration. One way to do this is
to write a custom BuildStep. Once written, this Step can be used in
the `master.cfg' file.

   The best reason for writing a custom BuildStep is to better parse
the results of the command being run. For example, a BuildStep that
knows about JUnit could look at the logfiles to determine which tests
had been run, how many passed and how many failed, and then report
more detailed information than a simple `rc==0' -based "good/bad"
decision.

* Menu:

* Writing BuildStep Constructors::
* BuildStep LogFiles::
* Reading Logfiles::
* Adding LogObservers::
* BuildStep URLs::


File: buildbot.info,  Node: Writing BuildStep Constructors,  Next: BuildStep LogFiles,  Up: Writing New BuildSteps

6.1.10.1 Writing BuildStep Constructors
.......................................

BuildStep classes have some extra equipment, because they are their
own factories.  Consider the use of a BuildStep in `master.cfg':

     f.addStep(MyStep(someopt="stuff", anotheropt=1))

   This creates a single instance of class `MyStep'.  However,
Buildbot needs a new object each time the step is executed.  This is
accomplished by storing the information required to instantiate a new
object in the `factory' attribute.  When the time comes to construct
a new Build, BuildFactory consults this attribute (via
`getStepFactory') and instantiates a new step object.

   When writing a new step class, then, keep in mind that you
cannot do anything "interesting" in the constructor - limit yourself
to checking and storing arguments.  To ensure that these arguments
are provided to any new objects, call `self.addFactoryArguments' with
any keyword arguments your constructor needs.

   Keep a `**kwargs' argument on the end of your options, and pass
that up to the parent class's constructor.

   The whole thing looks like this:

     class Frobnify(LoggingBuildStep):
         def __init__(self,
                 frob_what="frobee",
                 frob_how_many=None,
                 frob_how=None,
                 **kwargs):

             # check
             if frob_how_many is None:
                 raise TypeError("Frobnify argument frob_how_many is required")

             # call parent
             LoggingBuildStep.__init__(self, **kwargs)

             # and record arguments for later
             self.addFactoryArguments(
                 frob_what=frob_what,
                 frob_how_many=frob_how_many,
                 frob_how=frob_how)

     class FastFrobnify(Frobnify):
         def __init__(self,
                 speed=5,
                 **kwargs):
             Frobnify.__init__(self, **kwargs)
             self.addFactoryArguments(
                 speed=speed)


File: buildbot.info,  Node: BuildStep LogFiles,  Next: Reading Logfiles,  Prev: Writing BuildStep Constructors,  Up: Writing New BuildSteps

6.1.10.2 BuildStep LogFiles
...........................

Each BuildStep has a collection of "logfiles". Each one has a short
name, like "stdio" or "warnings". Each LogFile contains an arbitrary
amount of text, usually the contents of some output file generated
during a build or test step, or a record of everything that was
printed to stdout/stderr during the execution of some command.

   These LogFiles are stored to disk, so they can be retrieved later.

   Each can contain multiple "channels", generally limited to three
basic ones: stdout, stderr, and "headers". For example, when a
ShellCommand runs, it writes a few lines to the "headers" channel to
indicate the exact argv strings being run, which directory the command
is being executed in, and the contents of the current environment
variables. Then, as the command runs, it adds a lot of "stdout" and
"stderr" messages. When the command finishes, a final "header" line
is added with the exit code of the process.

   Status display plugins can format these different channels in
different ways. For example, the web page shows LogFiles as text/html,
with header lines in blue text, stdout in black, and stderr in red. A
different URL is available which provides a text/plain format, in
which stdout and stderr are collapsed together, and header lines are
stripped completely. This latter option makes it easy to save the
results to a file and run `grep' or whatever against the output.

   Each BuildStep contains a mapping (implemented in a python
dictionary) from LogFile name to the actual LogFile objects. Status
plugins can get a list of LogFiles to display, for example, a list of
HREF links that, when clicked, provide the full contents of the
LogFile.

Using LogFiles in custom BuildSteps
===================================

The most common way for a custom BuildStep to use a LogFile is to
summarize the results of a ShellCommand (after the command has
finished running). For example, a compile step with thousands of lines
of output might want to create a summary of just the warning messages.
If you were doing this from a shell, you would use something like:

     grep "warning:" output.log >warnings.log

   In a custom BuildStep, you could instead create a "warnings"
LogFile that contained the same text. To do this, you would add code
to your `createSummary' method that pulls lines from the main output
log and creates a new LogFile with the results:

         def createSummary(self, log):
             warnings = []
             for line in log.readlines():
                 if "warning:" in line:
                     warnings.append(line)
             self.addCompleteLog('warnings', "".join(warnings))

   This example uses the `addCompleteLog' method, which creates a new
LogFile, puts some text in it, and then "closes" it, meaning that no
further contents will be added. This LogFile will appear in the HTML
display under an HREF with the name "warnings", since that is the
name of the LogFile.

   You can also use `addHTMLLog' to create a complete (closed)
LogFile that contains HTML instead of plain text. The normal LogFile
will be HTML-escaped if presented through a web page, but the HTML
LogFile will not. At the moment this is only used to present a pretty
HTML representation of an otherwise ugly exception traceback when
something goes badly wrong during the BuildStep.

   In contrast, you might want to create a new LogFile at the
beginning of the step, and add text to it as the command runs. You
can create the LogFile and attach it to the build by calling
`addLog', which returns the LogFile object. You then add text to this
LogFile by calling methods like `addStdout' and `addHeader'. When you
are done, you must call the `finish' method so the LogFile can be
closed. It may be useful to create and populate a LogFile like this
from a LogObserver method (see *note Adding LogObservers::).

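   A minimal sketch of this open-LogFile pattern (the step and log names
are hypothetical):

     class FrobnicateStep(ShellCommand):
         def start(self):
             # create an open LogFile; we will add text while the step runs
             self.progress_log = self.addLog("progress")
             self.progress_log.addHeader("frobnication started\n")
             ShellCommand.start(self)

         def createSummary(self, log):
             self.progress_log.addStdout("frobnication finished\n")
             self.progress_log.finish()  # no further additions allowed
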
   The `logfiles=' argument to `ShellCommand' (see *note
ShellCommand::) creates new LogFiles and fills them in realtime by
asking the buildslave to watch an actual file on disk. The buildslave
will look for additions in the target file and report them back to
the BuildStep. These additions will be added to the LogFile by
calling `addStdout'. These secondary LogFiles can be used as the
source of a LogObserver just like the normal "stdio" LogFile.
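
   As a sketch, a step configured like the following would create a
second LogFile named "testlog", filled from a file the command writes
on the slave (the log name and path here are only examples):

     from buildbot.steps.shell import ShellCommand

     f.addStep(ShellCommand(command=["make", "test"],
                            logfiles={"testlog": "_test/test.log"}))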


File: buildbot.info,  Node: Reading Logfiles,  Next: Adding LogObservers,  Prev: BuildStep LogFiles,  Up: Writing New BuildSteps

6.1.10.3 Reading Logfiles
.........................

Once a LogFile has been added to a BuildStep with `addLog()',
`addCompleteLog()', `addHTMLLog()', or `logfiles=', your BuildStep
can retrieve it by using `getLog()':

     class MyBuildStep(ShellCommand):
         logfiles = { "nodelog": "_test/node.log" }

         def evaluateCommand(self, cmd):
             nodelog = self.getLog("nodelog")
             if "STARTED" in nodelog.getText():
                 return SUCCESS
             else:
                 return FAILURE

   For a complete list of the methods you can call on a LogFile,
please see the docstrings on the `IStatusLog' class in
`buildbot/interfaces.py'.


File: buildbot.info,  Node: Adding LogObservers,  Next: BuildStep URLs,  Prev: Reading Logfiles,  Up: Writing New BuildSteps

6.1.10.4 Adding LogObservers
............................

Most shell commands emit messages to stdout or stderr as they operate,
especially if you ask them nicely with a `--verbose' flag of some
sort. They may also write text to a log file while they run. Your
BuildStep can watch this output as it arrives, to keep track of how
much progress the command has made. You can get a better measure of
progress by counting the number of source files compiled or test cases
run than by merely tracking the number of bytes that have been written
to stdout. This improves the accuracy and the smoothness of the ETA
display.

   To accomplish this, you will need to attach a `LogObserver' to one
of the log channels, most commonly to the "stdio" channel but perhaps
to another one which tracks a log file. This observer is given all
text as it is emitted from the command, and has the opportunity to
parse that output incrementally. Once the observer has decided that
some event has occurred (like a source file being compiled), it can
use the `setProgress' method to tell the BuildStep about the progress
that this event represents.

   There are a number of pre-built `LogObserver' classes that you can
choose from (defined in `buildbot.process.buildstep'), and of course
you can subclass them to add further customization. The
`LogLineObserver' class handles the grunt work of buffering and
scanning for end-of-line delimiters, allowing your parser to operate
on complete stdout/stderr lines. (Lines longer than a set maximum
length are dropped; the maximum defaults to 16384 bytes, but you can
change it by calling `setMaxLineLength()' on your `LogLineObserver'
instance.  Use `sys.maxint' for effective infinity.)
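
   For instance, if your tool emits very long lines, you could raise
the limit before attaching the observer. This is just a sketch, to be
placed inside your BuildStep; `MyLineObserver' stands in for your own
`LogLineObserver' subclass:

     import sys

     observer = MyLineObserver()
     observer.setMaxLineLength(sys.maxint)  # effectively no line-length limit
     self.addLogObserver('stdio', observer)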

   For example, let's take a look at the `TrialTestCaseCounter',
which is used by the Trial step to count test cases as they are run.
As Trial executes, it emits lines like the following:

     buildbot.test.test_config.ConfigTest.testDebugPassword ... [OK]
     buildbot.test.test_config.ConfigTest.testEmpty ... [OK]
     buildbot.test.test_config.ConfigTest.testIRC ... [FAIL]
     buildbot.test.test_config.ConfigTest.testLocks ... [OK]

   When the tests are finished, trial emits a long line of "======"
and then some lines which summarize the tests that failed. We want to
avoid parsing these trailing lines, because their format is less
well-defined than the "[OK]" lines.

   The parser class looks like this:

     import re
     from buildbot.process.buildstep import LogLineObserver

     class TrialTestCaseCounter(LogLineObserver):
         _line_re = re.compile(r'^([\w\.]+) \.\.\. \[([^\]]+)\]$')
         numTests = 0
         finished = False

         def outLineReceived(self, line):
             if self.finished:
                 return
             if line.startswith("=" * 40):
                 self.finished = True
                 return

             m = self._line_re.search(line.strip())
             if m:
                 testname, result = m.groups()
                 self.numTests += 1
                 self.step.setProgress('tests', self.numTests)

   This parser only pays attention to stdout, since that's where trial
writes the progress lines. It has a mode flag named `finished' to
ignore everything after the "====" marker, and a scary-looking
regular expression to match each line while hopefully ignoring other
messages that might get displayed as the test runs.

   Each time it identifies that a test has been completed, it increments
its counter and delivers the new progress value to the step with
`self.step.setProgress'. This class is specifically measuring
progress along the "tests" metric, in units of test cases (as opposed
to other kinds of progress like the "output" metric, which measures
in units of bytes). The Progress-tracking code uses each progress
metric separately to come up with an overall completion percentage
and an ETA value.

   To connect this parser into the `Trial' BuildStep,
`Trial.__init__' ends with the following clause:

             # this counter will feed Progress along the 'test cases' metric
             counter = TrialTestCaseCounter()
             self.addLogObserver('stdio', counter)
             self.progressMetrics += ('tests',)

   This creates a TrialTestCaseCounter and tells the step that the
counter wants to watch the "stdio" log. The observer is automatically
given a reference to the step in its `.step' attribute.

A Somewhat Whimsical Example
----------------------------

Let's say that we've got some snazzy new unit-test framework called
Framboozle. It's the hottest thing since sliced bread. It slices, it
dices, it runs unit tests like there's no tomorrow. Plus if your unit
tests fail, you can use its name for a Web 2.1 startup company, make
millions of dollars, and hire engineers to fix the bugs for you, while
you spend your afternoons lazily hang-gliding along a scenic pacific
beach, blissfully unconcerned about the state of your tests.(1)

   To run a Framboozle-enabled test suite, you just run the
'framboozler' command from the top of your source code tree. The
'framboozler' command emits a bunch of stuff to stdout, but the most
interesting bit is that it emits the line "FNURRRGH!" every time it
finishes running a test case(2). You'd like to have a test-case
counting LogObserver that watches for these lines and counts them,
because counting them will help the buildbot more accurately
calculate how long the build will take, and this will let you know
exactly how long you can sneak out of the office for your
hang-gliding lessons without anyone noticing that you're gone.

   This will involve writing a new BuildStep (probably named
"Framboozle") which inherits from ShellCommand. The BuildStep class
definition itself will look something like this:

     # START
     from buildbot.steps.shell import ShellCommand
     from buildbot.process.buildstep import LogLineObserver

     class FNURRRGHCounter(LogLineObserver):
         numTests = 0
         def outLineReceived(self, line):
             if "FNURRRGH!" in line:
                 self.numTests += 1
                 self.step.setProgress('tests', self.numTests)

     class Framboozle(ShellCommand):
         command = ["framboozler"]

         def __init__(self, **kwargs):
             ShellCommand.__init__(self, **kwargs)   # always upcall!
             counter = FNURRRGHCounter()
             self.addLogObserver('stdio', counter)
             self.progressMetrics += ('tests',)
     # FINISH

   So that's the code that we want to wind up using. How do we
actually deploy it?

   You have a couple of different options.

   Option 1: The simplest technique is to put this text (everything
from START to FINISH) in your master.cfg file, somewhere before the
BuildFactory definition that actually uses it in a clause like:

     f = BuildFactory()
     f.addStep(SVN(svnurl="stuff"))
     f.addStep(Framboozle())

   Remember that master.cfg is secretly just a python program with one
job: populating the BuildmasterConfig dictionary. And python programs
are allowed to define as many classes as they like. So you can define
classes and use them in the same file, just as long as the class is
defined before some other code tries to use it.

   This is easy, and it keeps the point of definition very close to
the point of use, and whoever replaces you after that unfortunate
hang-gliding accident will appreciate being able to easily figure out
what the heck this stupid "Framboozle" step is doing anyways. The
downside is that every time you reload the config file, the Framboozle
class will get redefined, which means that the buildmaster will think
that you've reconfigured all the Builders that use it, even though
nothing changed. Bleh.

   Option 2: Instead, we can put this code in a separate file, and
import it into the master.cfg file just like we would the normal
buildsteps like ShellCommand and SVN.

   Create a directory named ~/lib/python, put everything from START to
FINISH in ~/lib/python/framboozle.py, and run your buildmaster using:

      PYTHONPATH=~/lib/python buildbot start MASTERDIR

   or use the `Makefile.buildbot' to control the way `buildbot start'
works. Or add something like this to something like your ~/.bashrc or
~/.bash_profile or ~/.cshrc:

      export PYTHONPATH=~/lib/python

   Once we've done this, our master.cfg can look like:

     from framboozle import Framboozle
     f = BuildFactory()
     f.addStep(SVN(svnurl="stuff"))
     f.addStep(Framboozle())

   or:

     import framboozle
     f = BuildFactory()
     f.addStep(SVN(svnurl="stuff"))
     f.addStep(framboozle.Framboozle())

   (check out the python docs for details about how "import" and
"from A import B" work).

   What we've done here is to tell python that every time it handles
an "import" statement for some named module, it should look in our
~/lib/python/ for that module before it looks anywhere else. After our
directory, it will try a bunch of standard directories too
(including the one where buildbot is installed). By setting the
PYTHONPATH environment variable, you can add directories to the front
of this search list.

   Python knows that once it "import"s a file, it doesn't need to
re-import it again. This means that reconfiguring the buildmaster
(with "buildbot reconfig", for example) won't make it think the
Framboozle class has changed every time, so the Builders that use it
will not be spuriously restarted. On the other hand, you either have
to start your buildmaster in a slightly weird way, or you have to
modify your environment to set the PYTHONPATH variable.

   Option 3: Install this code into a standard python library
directory

   Find out what your python's standard include path is by asking it:

     80:warner@luther% python
     Python 2.4.4c0 (#2, Oct  2 2006, 00:57:46)
     [GCC 4.1.2 20060928 (prerelease) (Debian 4.1.1-15)] on linux2
     Type "help", "copyright", "credits" or "license" for more information.
     >>> import sys
     >>> import pprint
     >>> pprint.pprint(sys.path)
     ['',
      '/usr/lib/python24.zip',
      '/usr/lib/python2.4',
      '/usr/lib/python2.4/plat-linux2',
      '/usr/lib/python2.4/lib-tk',
      '/usr/lib/python2.4/lib-dynload',
      '/usr/local/lib/python2.4/site-packages',
      '/usr/lib/python2.4/site-packages',
      '/usr/lib/python2.4/site-packages/Numeric',
      '/var/lib/python-support/python2.4',
      '/usr/lib/site-python']

   In this case, putting the code into
/usr/local/lib/python2.4/site-packages/framboozle.py would work just
fine. We can use the same master.cfg "import framboozle" statement as
in Option 2. By putting it in a standard include directory (instead of
the decidedly non-standard ~/lib/python), we don't even have to set
PYTHONPATH to anything special. The downside is that you probably have
to be root to write to one of those standard include directories.

   Option 4: Submit the code for inclusion in the Buildbot
distribution

   Make a fork of buildbot on http://github.com/djmitche/buildbot or
post a patch in a bug at http://buildbot.net.  In either case, post a
note about your patch to the mailing list, so others can provide
feedback and, eventually, commit it. Once it has been merged, your
master.cfg can use the new step directly:

     from buildbot.steps import framboozle
     f = BuildFactory()
     f.addStep(SVN(svnurl="stuff"))
     f.addStep(framboozle.Framboozle())

   And then you don't even have to install framboozle.py anywhere on
your system, since it will ship with Buildbot. You don't have to be
root, you don't have to set PYTHONPATH. But you do have to make a
good case for Framboozle being worth going into the main
distribution, you'll probably have to provide docs and some unit test
cases, you'll need to figure out what kind of beer the author likes,
and then you'll have to wait until the next release. But in some
environments, all this is easier than getting root on your
buildmaster box, so the tradeoffs may actually be worth it.

   Putting the code in master.cfg (1) makes it available to that
buildmaster instance. Putting it in a file in a personal library
directory (2) makes it available for any buildmasters you might be
running. Putting it in a file in a system-wide shared library
directory (3) makes it available for any buildmasters that anyone on
that system might be running. Getting it into the buildbot's upstream
repository (4) makes it available for any buildmasters that anyone in
the world might be running. It's all a matter of how widely you want
to deploy that new class.

   ---------- Footnotes ----------

   (1) framboozle.com is still available. Remember, I get 10% :).

   (2) Framboozle gets very excited about running unit tests.


File: buildbot.info,  Node: BuildStep URLs,  Prev: Adding LogObservers,  Up: Writing New BuildSteps

6.1.10.5 BuildStep URLs
.......................

Each BuildStep has a collection of "links". Like its collection of
LogFiles, each link has a name and a target URL. The web status page
creates HREFs for each link in the same box as it does for LogFiles,
except that the target of the link is the external URL instead of an
internal link to a page that shows the contents of the LogFile.

   These external links can be used to point at build information
hosted on other servers. For example, the test process might produce
an intricate description of which tests passed and failed, or some
sort of code coverage data in HTML form, or a PNG or GIF image with a
graph of memory usage over time. The external link can provide an
easy way for users to navigate from the buildbot's status page to
these external web sites or file servers. Note that the step itself is
responsible for ensuring that there will be a document available at
the given URL (perhaps by using `scp' to copy the HTML output to a
`~/public_html/' directory on a remote web server). Calling `addURL'
does not magically populate a web server.

   To set one of these links, the BuildStep should call the `addURL'
method with the name of the link and the target URL. Multiple URLs can
be set.

   In this example, we assume that the `make test' command causes a
collection of HTML files to be created and put somewhere on the
coverage.example.org web server, in a filename that incorporates the
build number.

     class TestWithCodeCoverage(BuildStep):
         command = ["make", "test",
                    WithProperties("buildnum=%s" % "buildnumber")]

         def createSummary(self, log):
             buildnumber = self.getProperty("buildnumber")
             url = "http://coverage.example.org/builds/%s.html" % buildnumber
             self.addURL("coverage", url)

   You might also want to extract the URL from some special message
output by the build process itself:

     class TestWithCodeCoverage(BuildStep):
         command = ["make", "test",
                    WithProperties("buildnum=%s" % "buildnumber")]

         def createSummary(self, log):
             output = StringIO(log.getText())
             for line in output.readlines():
                 if line.startswith("coverage-url:"):
                     url = line[len("coverage-url:"):].strip()
                     self.addURL("coverage", url)
                     return

   Note that a build process which emits both stdout and stderr might
cause this line to be split or interleaved between other lines. It
might be necessary to restrict the getText() call to only stdout with
something like this:

             output = StringIO("".join([c[1]
                                        for c in log.getChunks()
                                        if c[0] == LOG_CHANNEL_STDOUT]))

   Of course if the build is run under a PTY, then stdout and stderr
will be merged before the buildbot ever sees them, so such
interleaving will be unavoidable.


File: buildbot.info,  Node: Interlocks,  Next: Build Factories,  Prev: Build Steps,  Up: Build Process

6.2 Interlocks
==============

Until now, we have assumed that a master can run builds at any slave
whenever needed or desired.  Sometimes, however, you want to enforce
additional constraints on builds. For reasons like limited network
bandwidth, old slave machines, or an easily overloaded database
server, you may want to limit the number of builds (or build steps)
that can access a resource.

   The mechanism used by Buildbot is known as the read/write lock.(1)
It allows either many readers or a single writer, but not a
combination of readers and writers. The general lock has been
modified and extended for use in Buildbot. Firstly, the general lock
allows an unlimited number of readers. In Buildbot, we often want to
put an upper limit on the number of readers, for example allowing two
out of five possible builds at the same time. To do this, the lock
counts the number of active readers. Secondly, the terms _read mode_
and _write mode_ are confusing in the Buildbot context. They have been
replaced by _counting mode_ (since the lock counts them) and
_exclusive mode_.  As a result of these changes, locks in Buildbot
allow a number of builds (up to some fixed limit) in counting mode,
or they allow one build in exclusive mode.

   Often, not all slaves are equal. To allow for this situation,
Buildbot lets you set a separate upper limit on the count for each
slave. In this way, you can have at most 3 concurrent builds at a
fast slave, 2 at a slightly older slave, and 1 at all other slaves.

   The final thing you can specify when you introduce a new lock is
its scope.  Some constraints are global - they must be enforced over
all slaves. Other constraints are local to each slave.  A _master
lock_ is used for global constraints. You can ensure, for example,
that at most one build (of all builds running at all slaves) accesses
the database server. With a _slave lock_ you can add a limit local
to each slave. With such a lock, you can for example enforce an upper
limit on the number of active builds at a slave, like above.

   Time for a few examples. Below, a master lock is defined to protect
a database, and a slave lock is created to limit the number of
builds at each slave.

     from buildbot import locks

     db_lock = locks.MasterLock("database")
     build_lock = locks.SlaveLock("slave_builds",
                                  maxCount = 1,
                                  maxCountForSlave = { 'fast': 3, 'new': 2 })

   After importing locks from buildbot, `db_lock' is defined to be a
master lock. The `"database"' string is used for uniquely identifying
the lock.  On the next line, a slave lock called `build_lock' is
created. It is identified by the `"slave_builds"' string. Since the
requirements of the lock are a bit more complicated, two optional
arguments are also specified. The `maxCount' parameter sets the
default limit for builds in counting mode to `1'. For the slave
called `'fast'' however, we want to have at most three builds, and
for the slave called `'new'' the upper limit is two builds running at
the same time.

   The next step is using the locks in builds.  Buildbot allows a
lock to be used during an entire build (from beginning to end), or
only during a single build step. In the latter case, the lock is
claimed for use just before the step starts, and released again when
the step ends. To prevent deadlocks,(2) it is not possible to claim
or release locks at other times.

   To use locks, you add them to a build or build step with a
`locks' argument.  Each use of a lock is either in counting mode
(that is, possibly shared with other builds) or in exclusive mode. A
build or build step proceeds only when it has acquired all of its
locks. If a build or step needs a lot of locks, it may be starved(3)
by other builds that need fewer locks.

   To illustrate the use of locks, here are a few examples.

     from buildbot import locks
     from buildbot.steps import source, shell
     from buildbot.process import factory

     db_lock = locks.MasterLock("database")
     build_lock = locks.SlaveLock("slave_builds",
                                  maxCount = 1,
                                  maxCountForSlave = { 'fast': 3, 'new': 2 })

     f = factory.BuildFactory()
     f.addStep(source.SVN(svnurl="http://example.org/svn/Trunk"))
     f.addStep(shell.ShellCommand(command="make all"))
     f.addStep(shell.ShellCommand(command="make test",
                                  locks=[db_lock.access('exclusive')]))

     b1 = {'name': 'full1', 'slavename': 'fast',  'builddir': 'f1', 'factory': f,
            'locks': [build_lock.access('counting')] }

      b2 = {'name': 'full2', 'slavename': 'new',   'builddir': 'f2', 'factory': f,
            'locks': [build_lock.access('counting')] }

      b3 = {'name': 'full3', 'slavename': 'old',   'builddir': 'f3', 'factory': f,
            'locks': [build_lock.access('counting')] }

      b4 = {'name': 'full4', 'slavename': 'other', 'builddir': 'f4', 'factory': f,
            'locks': [build_lock.access('counting')] }

     c['builders'] = [b1, b2, b3, b4]

   Here we define four builders, `b1', `b2', `b3', and `b4', each
running on a different slave. Each builder performs the same checkout,
make, and test build step sequence.  We want to enforce that at most
one test step is executed across all slaves, due to restrictions
imposed by the database server. This is done by adding the `locks='
parameter to the third step. It takes a list of locks with their
access mode. In this case only the `db_lock' is needed. The exclusive
access mode is used to ensure that at most one slave executes the
test step at a time.

   In addition to exclusive access to the database, we also want
slaves to stay responsive even under the load of a large number of
builds being triggered.  For this purpose, the slave lock called
`build_lock' is defined. Since this constraint holds for entire
builds, the lock is specified in the builder with `'locks':
[build_lock.access('counting')]'.

   ---------- Footnotes ----------

   (1) See http://en.wikipedia.org/wiki/Read/write_lock_pattern for
more information.

   (2) Deadlock is the situation where two or more builds each hold a
lock in exclusive mode, and in addition want to claim the lock held by
the other build exclusively as well. Since locks allow at most one
exclusive user, both builds will wait forever.

   (3) Starvation is the situation in which only a few locks are
available, and they are immediately grabbed by other builds. As a
result, it may take a long time before all the locks needed by the
starved build are free at the same time.


File: buildbot.info,  Node: Build Factories,  Prev: Interlocks,  Up: Build Process

6.3 Build Factories
===================

Each Builder is equipped with a "build factory", which is responsible
for producing the actual `Build' objects that perform each build.
This factory is created in the configuration file, and attached to a
Builder through the `factory' element of its dictionary.

   The standard `BuildFactory' object creates `Build' objects by
default. These Builds will each execute a collection of BuildSteps in
a fixed sequence. Each step can affect the results of the build, but
in general there is little intelligence to tie the different steps
together. You can create subclasses of `Build' to implement more
sophisticated build processes, and then use a subclass of
`BuildFactory' (or simply set the `buildClass' attribute) to create
instances of your new Build subclass.
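
   As a rough sketch (the `MyBuild' name is purely illustrative, and
the empty class body is where your customizations would go):

     from buildbot.process.base import Build
     from buildbot.process.factory import BuildFactory

     class MyBuild(Build):
         # override Build methods here to implement a smarter process
         pass

     f = BuildFactory()
     f.buildClass = MyBuild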

* Menu:

* BuildStep Objects::
* BuildFactory::
* Process-Specific build factories::


File: buildbot.info,  Node: BuildStep Objects,  Next: BuildFactory,  Prev: Build Factories,  Up: Build Factories

6.3.1 BuildStep Objects
-----------------------

The steps used by these builds are all subclasses of `BuildStep'.
The standard ones provided with Buildbot are documented in *note
Build Steps::. You can also write your own subclasses to use in
builds.

   The basic behavior for a `BuildStep' is to:

   * run for a while, then stop

   * possibly invoke some RemoteCommands on the attached build slave

   * possibly produce a set of log files

   * finish with a status described by one of four values defined in
     buildbot.status.builder: SUCCESS, WARNINGS, FAILURE, SKIPPED

   * provide a list of short strings to describe the step

   * define a color (generally green, orange, or red) with which the
     step should be displayed

   More sophisticated steps may produce additional information and
provide it to later build steps, or store it in the factory to provide
to later builds.

* Menu:

* BuildFactory Attributes::
* Quick builds::


File: buildbot.info,  Node: BuildFactory,  Next: Process-Specific build factories,  Prev: BuildStep Objects,  Up: Build Factories

6.3.2 BuildFactory
------------------

The default `BuildFactory', provided in the
`buildbot.process.factory' module, contains an internal list of
"BuildStep specifications": a list of `(step_class, kwargs)' tuples
for each. These specification tuples are constructed when the config
file is read, by asking the instances passed to `addStep' for their
subclass and arguments.

   When asked to create a Build, the `BuildFactory' puts a copy of
the list of step specifications into the new Build object. When the
Build is actually started, these step specifications are used to
create the actual set of BuildSteps, which are then executed one at a
time. This serves to give each Build an independent copy of each step.
For example, a build which consists of a CVS checkout followed by a
`make build' would be constructed as follows:

     from buildbot.steps import source, shell
     from buildbot.process import factory

     f = factory.BuildFactory()
     f.addStep(source.CVS(cvsroot=CVSROOT, cvsmodule="project", mode="update"))
     f.addStep(shell.Compile(command=["make", "build"]))

   (To support config files from buildbot-0.7.5 and earlier,
`addStep' also accepts the `f.addStep(shell.Compile,
command=["make","build"])' form, although its use is discouraged
because then the `Compile' step doesn't get to validate or complain
about its arguments until build time. The modern pass-by-instance
approach allows this validation to occur while the config file is
being loaded, where the admin has a better chance of noticing
problems).

   It is also possible to pass a list of steps into the
`BuildFactory' when it is created. Using `addStep' is usually
simpler, but there are cases where it is more convenient to create
the list of steps ahead of time:

     from buildbot.steps import source, shell
     from buildbot.process import factory

     all_steps = [source.CVS(cvsroot=CVSROOT, cvsmodule="project", mode="update"),
                  shell.Compile(command=["make", "build"]),
                 ]
     f = factory.BuildFactory(all_steps)

   Each step can affect the build process in the following ways:

   * If the step's `haltOnFailure' attribute is True, then a failure
     in the step (i.e. if it completes with a result of FAILURE) will
     cause the whole build to be terminated immediately: no further
     steps will be executed, with the exception of steps with
     `alwaysRun' set to True. `haltOnFailure' is useful for setup
     steps upon which the rest of the build depends: if the CVS
     checkout or `./configure' process fails, there is no point in
     trying to compile or test the resulting tree.

   * If the step's `alwaysRun' attribute is True, then it will always
     be run, regardless of whether previous steps have failed. This is
     useful for cleanup steps that should always be run to return the
     build directory or build slave into a good state.

   * If the `flunkOnFailure' or `flunkOnWarnings' flag is set, then a
     result of FAILURE or WARNINGS will mark the build as a whole as
     FAILED. However, the remaining steps will still be executed.
     This is appropriate for things like multiple testing steps: a
     failure in any one of them will indicate that the build has
     failed, however it is still useful to run them all to completion.

   * Similarly, if the `warnOnFailure' or `warnOnWarnings' flag is
     set, then a result of FAILURE or WARNINGS will mark the build as
     having WARNINGS, and the remaining steps will still be executed.
     This may be appropriate for certain kinds of optional build or
     test steps.  For example, a failure experienced while building
     documentation files should be made visible with a WARNINGS
     result but not be serious enough to warrant marking the whole
     build with a FAILURE.


   In addition, each Step produces its own results, may create
logfiles, etc. However only the flags described above have any effect
on the build as a whole.

   The pre-defined BuildSteps like `CVS' and `Compile' have
reasonably appropriate flags set on them already. For example, without
a source tree there is no point in continuing the build, so the `CVS'
class has the `haltOnFailure' flag set to True. Look in
`buildbot/steps/*.py' to see how the other Steps are marked.
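
   You can also override these flags on a per-step basis by passing
them as keyword arguments when you create the step instance. A short
sketch, continuing the factory `f' and imports from the earlier
example (the commands shown are only illustrations):

     from buildbot.steps import shell

     f.addStep(shell.Compile(command=["make", "all"], haltOnFailure=True))
     f.addStep(shell.ShellCommand(command=["make", "clean"],
                                  alwaysRun=True))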

   Each Step is created with an additional `workdir' argument that
indicates where its actions should take place. This is specified as a
subdirectory of the slave builder's base directory, with a default
value of `build'. This is only implemented as a step argument (as
opposed to simply being a part of the base directory) because the
CVS/SVN steps need to perform their checkouts from the parent
directory.
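
   For example, a step that should run in a subdirectory of the usual
build directory might be created like this (a sketch; the directory
name and command are only illustrations):

     f.addStep(shell.ShellCommand(command=["make", "html"],
                                  workdir="build/docs"))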

* Menu:

* BuildFactory Attributes::
* Quick builds::


File: buildbot.info,  Node: BuildFactory Attributes,  Next: Quick builds,  Prev: BuildFactory,  Up: BuildFactory

6.3.2.1 BuildFactory Attributes
...............................

Some attributes from the BuildFactory are copied into each Build.

`useProgress'
     (defaults to True): if True, the buildmaster keeps track of how
     long each step takes, so it can provide estimates of how long
     future builds will take. If builds are not expected to take a
     consistent amount of time (such as incremental builds in which a
     random set of files are recompiled or tested each time), this
     should be set to False to inhibit progress-tracking.



File: buildbot.info,  Node: Quick builds,  Prev: BuildFactory Attributes,  Up: BuildFactory

6.3.2.2 Quick builds
....................

The difference between a "full build" and a "quick build" is that
quick builds are generally done incrementally, starting with the tree
where the previous build was performed. That simply means that the
source-checkout step should be given a `mode='update'' flag, to do
the source update in-place.

   In addition to that, the `useProgress' flag should be set to
False. Incremental builds will (or at least they ought to) compile as
few files as necessary, so they will take an unpredictable amount of
time to run. Therefore it would be misleading to claim to predict how
long the build will take.
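
   Putting these two together, a quick-build factory might look like
the following sketch (the svnurl is a placeholder):

     from buildbot.steps import source, shell
     from buildbot.process import factory

     f_quick = factory.BuildFactory()
     f_quick.useProgress = False
     f_quick.addStep(source.SVN(svnurl="http://example.org/svn/Trunk",
                                mode="update"))
     f_quick.addStep(shell.Compile(command=["make", "all"]))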


File: buildbot.info,  Node: Process-Specific build factories,  Prev: BuildFactory,  Up: Build Factories

6.3.3 Process-Specific build factories
--------------------------------------

Many projects use one of a few popular build frameworks to simplify
the creation and maintenance of Makefiles or other compilation
structures. Buildbot provides several pre-configured BuildFactory
subclasses which let you build these projects with a minimum of fuss.

* Menu:

* GNUAutoconf::
* CPAN::
* Python distutils::
* Python/Twisted/trial projects::


File: buildbot.info,  Node: GNUAutoconf,  Next: CPAN,  Prev: Process-Specific build factories,  Up: Process-Specific build factories

6.3.3.1 GNUAutoconf
...................

GNU Autoconf (http://www.gnu.org/software/autoconf/) is a software
portability tool, intended to make it possible to write programs in C
(and other languages) which will run on a variety of UNIX-like
systems. Most GNU software is built using autoconf. It is frequently
used in combination with GNU automake. These tools both encourage a
build process which usually looks like this:

     % CONFIG_ENV=foo ./configure --with-flags
     % make all
     % make check
     # make install

   (except of course the Buildbot always skips the `make install'
part).

   The Buildbot's `buildbot.process.factory.GNUAutoconf' factory is
designed to build projects which use GNU autoconf and/or automake. The
configuration environment variables, the configure flags, and the
command lines used for the compile and test are all configurable; in
general the default values will be suitable.

   Example:

     # use the s() convenience function defined earlier
     f = factory.GNUAutoconf(source=s(step.SVN, svnurl=URL, mode="copy"),
                             flags=["--disable-nls"])

   Required Arguments:

`source'
     This argument must be a step specification tuple that provides a
     BuildStep to generate the source tree.

   Optional Arguments:

`configure'
     The command used to configure the tree. Defaults to
     `./configure'. Accepts either a string or a list of shell argv
     elements.

`configureEnv'
     The environment used for the initial configuration step. This
     accepts a dictionary which will be merged into the buildslave's
     normal environment. This is commonly used to provide things like
     `CFLAGS="-O2 -g"' (to control the optimization and debugging
     options used during the compile).  Defaults to an empty dictionary.

`configureFlags'
     A list of flags to be appended to the argument list of the
     configure command. This is commonly used to enable or disable
     specific features of the autoconf-controlled package, like
     `["--without-x"]' to disable windowing support. Defaults to an
     empty list.

`compile'
     this is a shell command or list of argv values which is used to
     actually compile the tree. It defaults to `make all'. If set to
     None, the compile step is skipped.

`test'
     this is a shell command or list of argv values which is used to
     run the tree's self-tests. It defaults to `make check'. If set to
     None, the test step is skipped.
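
   For example, a sketch which overrides several of these arguments,
re-using the `s()' convenience function and `URL' from the earlier
example:

     f = factory.GNUAutoconf(source=s(step.SVN, svnurl=URL, mode="copy"),
                             configureEnv={"CFLAGS": "-O2 -g"},
                             configureFlags=["--without-x"],
                             compile=["make", "all"],
                             test=["make", "check"])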



File: buildbot.info,  Node: CPAN,  Next: Python distutils,  Prev: GNUAutoconf,  Up: Process-Specific build factories

6.3.3.2 CPAN
............

Most Perl modules available from the CPAN (http://www.cpan.org/)
archive use the `MakeMaker' module to provide configuration, build,
and test services. The standard build routine for these modules looks
like:

     % perl Makefile.PL
     % make
     % make test
     # make install

   (except again Buildbot skips the install step)

   Buildbot provides a `CPAN' factory to compile and test these
projects.

   Arguments:
`source'
     (required): A step specification tuple, like that used by
     GNUAutoconf.

`perl'
     A string which specifies the `perl' executable to use. Defaults
     to just `perl'.
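
   For example, a sketch which again re-uses the `s()' convenience
function and `URL' from the GNUAutoconf example, and points at a
specific perl installation:

     f = factory.CPAN(source=s(step.SVN, svnurl=URL, mode="copy"),
                      perl="/usr/local/bin/perl")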



File: buildbot.info,  Node: Python distutils,  Next: Python/Twisted/trial projects,  Prev: CPAN,  Up: Process-Specific build factories

6.3.3.3 Python distutils
........................

Most Python modules use the `distutils' package to provide
configuration and build services. The standard build process looks
like:

     % python ./setup.py build
     % python ./setup.py install

   Unfortunately, although Python provides a standard unit-test
framework named `unittest', to the best of my knowledge `distutils'
does not provide a standardized target to run such unit tests. (Please
let me know if I'm wrong, and I will update this factory.)

   The `Distutils' factory provides support for running the build
part of this process. It accepts the same `source=' parameter as the
other build factories.

   Arguments:
`source'
     (required): A step specification tuple, like that used by
     GNUAutoconf.

`python'
     A string which specifies the `python' executable to use. Defaults
     to just `python'.

`test'
     Provides a shell command which runs unit tests. This accepts
     either a string or a list. The default value is None, which
     disables the test step (since there is no common default command
     to run unit tests in distutils modules).
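
   For example, a sketch with a hypothetical test-runner script
(`./test/run_tests.py' is only an illustration):

     f = factory.Distutils(source=s(step.SVN, svnurl=URL, mode="copy"),
                           python="python2.4",
                           test=["python2.4", "./test/run_tests.py"])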



File: buildbot.info,  Node: Python/Twisted/trial projects,  Prev: Python distutils,  Up: Process-Specific build factories

6.3.3.4 Python/Twisted/trial projects
.....................................

Twisted provides a unit test tool named `trial' which provides a few
improvements over Python's built-in `unittest' module. Many python
projects which use Twisted for their networking or application
services also use trial for their unit tests. These modules are
usually built and tested with something like the following:

     % python ./setup.py build
     % PYTHONPATH=build/lib.linux-i686-2.3 trial -v PROJECTNAME.test
     % python ./setup.py install

   Unfortunately, the `build/lib' directory into which the
built/copied .py files are placed is actually architecture-dependent,
and I do not yet know of a simple way to calculate its value. For many
projects it is sufficient to import their libraries "in place" from
the tree's base directory (`PYTHONPATH=.').

   In addition, the PROJECTNAME value where the test files are
located is project-dependent: it is usually just the project's
top-level library directory, as common practice suggests the unit test
files are put in the `test' sub-module. This value cannot be guessed;
the `Trial' class must be told where to find the test files.

   The `Trial' class provides support for building and testing
projects which use distutils and trial. If the test module name is
specified, trial will be invoked. The library path used for testing
can also be set.

   One advantage of trial is that the Buildbot happens to know how to
parse trial output, letting it identify which tests passed and which
ones failed. The Buildbot can then provide fine-grained reports about
how many tests have failed, when individual tests fail when they had
been passing previously, etc.

   Another feature of trial is that you can give it a series of source
.py files, and it will search them for special `test-case-name' tags
that indicate which test cases provide coverage for that file.  Trial
can then run just the appropriate tests. This is useful for quick
builds, where you want to only run the test cases that cover the
changed functionality.

   Arguments:
`source'
     (required): A step specification tuple, like that used by
     GNUAutoconf.

`buildpython'
     A list (argv array) of strings which specifies the `python'
     executable to use when building the package. Defaults to just
     `['python']'. It may be useful to add flags here, to suppress
     warnings during compilation of extension modules. This list is
     extended with `['./setup.py', 'build']' and then executed in a
     ShellCommand.

`testpath'
     Provides a directory to add to `PYTHONPATH' when running the unit
     tests, if tests are being run. Defaults to `.' to include the
     project files in-place. The generated build library is frequently
     architecture-dependent, but may simply be `build/lib' for
     pure-python modules.

`trialpython'
     Another list of strings used to build the command that actually
     runs trial. This is prepended to the contents of the `trial'
     argument below. It may be useful to add `-W' flags here to
     suppress warnings that occur while tests are being run. Defaults
     to an empty list, meaning `trial' will be run without an explicit
     interpreter, which is generally what you want if you're using
     `/usr/bin/trial' instead of, say, the `./bin/trial' that lives
     in the Twisted source tree.

`trial'
     provides the name of the `trial' command. It is occasionally
     useful to use an alternate executable, such as `trial2.2' which
     might run the tests under an older version of Python. Defaults to
     `trial'.

`tests'
     Provides a module name or names which contain the unit tests for
     this project. Accepts a string, typically `PROJECTNAME.test', or
     a list of strings. Defaults to None, indicating that no tests
     should be run. You must either set this or `useTestCaseNames' to
     do anything useful with the Trial factory.

`useTestCaseNames'
     Tells the Step to provide the names of all changed .py files to
     trial, so it can look for test-case-name tags and run just the
     matching test cases. Suitable for use in quick builds. Defaults
     to False.

`randomly'
     If `True', tells Trial (with the `--random=0' argument) to run
     the test cases in random order, which sometimes catches subtle
     inter-test dependency bugs. Defaults to `False'.

`recurse'
     If `True', tells Trial (with the `--recurse' argument) to look
     in all subdirectories for additional test cases. It isn't clear
     to me how this works, but it may be useful to deal with the
     unknown-PROJECTNAME problem described above, and is currently
     used in the Twisted buildbot to accommodate the fact that test
     cases are now distributed through multiple
     twisted.SUBPROJECT.test directories.


   Unless one of `tests' or `useTestCaseNames' is set, no tests will
be run.

   Some quick examples follow. Most of these examples assume that the
target python code (the "code under test") can be reached directly
from the root of the target tree, rather than being in a `lib/'
subdirectory.

     #  Trial(source, tests="toplevel.test") does:
     #   python ./setup.py build
     #   PYTHONPATH=. trial -to toplevel.test

     #  Trial(source, tests=["toplevel.test", "other.test"]) does:
     #   python ./setup.py build
     #   PYTHONPATH=. trial -to toplevel.test other.test

     #  Trial(source, useTestCaseNames=True) does:
     #   python ./setup.py build
     #   PYTHONPATH=. trial -to --testmodule=foo/bar.py..  (from Changes)

     #  Trial(source, buildpython=["python2.3", "-Wall"], tests="foo.tests"):
     #   python2.3 -Wall ./setup.py build
     #   PYTHONPATH=. trial -to foo.tests

     #  Trial(source, trialpython="python2.3", trial="/usr/bin/trial",
     #        tests="foo.tests") does:
      #   python ./setup.py build
     #   PYTHONPATH=. python2.3 /usr/bin/trial -to foo.tests

     # For running trial out of the tree being tested (only useful when the
     # tree being built is Twisted itself):
     #  Trial(source, trialpython=["python2.3", "-Wall"], trial="./bin/trial",
     #        tests="foo.tests") does:
      #   python ./setup.py build
     #   PYTHONPATH=. python2.3 -Wall ./bin/trial -to foo.tests

   If the output directory of `./setup.py build' is known, you can
pull the python code from the built location instead of the source
directories. This should be able to handle variations in where the
source comes from, as well as accommodating binary extension modules:

     # Trial(source,tests="toplevel.test",testpath='build/lib.linux-i686-2.3')
     # does:
     #  python ./setup.py build
     #  PYTHONPATH=build/lib.linux-i686-2.3 trial -to toplevel.test


File: buildbot.info,  Node: Status Delivery,  Next: Command-line tool,  Prev: Build Process,  Up: Top

7 Status Delivery
*****************

More details are available in the docstrings for each class; use a
command like `pydoc buildbot.status.html.WebStatus' to see them.
Most status delivery objects take a `categories=' argument, which can
contain a list of "category" names: in this case, it will only show
status for Builders that are in one of the named categories.

   (implementor's note: each of these objects should be a
service.MultiService which will be attached to the BuildMaster object
when the configuration is processed. They should use
`self.parent.getStatus()' to get access to the top-level IStatus
object, either inside `startService' or later. They may call
`status.subscribe()' in `startService' to receive notifications of
builder events, in which case they must define `builderAdded' and
related methods. See the docstrings in `buildbot/interfaces.py' for
full details.)
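
   A very rough sketch of such an object, under the assumptions in the
note above (the class name and method bodies are only illustrative,
not a complete plugin):

     from twisted.application import service

     class MyStatusPlugin(service.MultiService):
         def startService(self):
             service.MultiService.startService(self)
             self.status = self.parent.getStatus()
             self.status.subscribe(self)

         def stopService(self):
             self.status.unsubscribe(self)
             return service.MultiService.stopService(self)

         def builderAdded(self, builderName, builder):
             # return an object with builder-level status methods to
             # watch this Builder, or None to ignore it
             return None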

* Menu:

* WebStatus::
* MailNotifier::
* IRC Bot::
* PBListener::
* Writing New Status Plugins::


File: buildbot.info,  Node: WebStatus,  Next: MailNotifier,  Prev: Status Delivery,  Up: Status Delivery

7.1 WebStatus
=============

The `buildbot.status.html.WebStatus' status target runs a small web
server inside the buildmaster. You can point a browser at this web
server and retrieve information about every build the buildbot knows
about, as well as find out what the buildbot is currently working on.

   The first page you will see is the "Welcome Page", which contains
links to all the other useful pages. This page is simply served from
the `public_html/index.html' file in the buildmaster's base
directory, where it is created by the `buildbot create-master'
command along with the rest of the buildmaster.

   The most complex resource provided by `WebStatus' is the
"Waterfall Display", which shows a time-based chart of events. This
somewhat-busy display provides detailed information about all steps of
all recent builds, and provides hyperlinks to look at individual build
logs and source changes. By simply reloading this page on a regular
basis, you will see a complete description of everything the buildbot
is currently working on.

   There are also pages with more specialized information. For
example, there is a page which shows the last 20 builds performed by
the buildbot, one line each. Each line is a link to detailed
information about that build. By adding query arguments to the URL
used to reach this page, you can narrow the display to builds that
involved certain branches, or which ran on certain Builders. These
pages are described in great detail below.

   When the buildmaster is created, a subdirectory named
`public_html/' is created in its base directory. By default,
`WebStatus' will serve files from this directory: for example, when a
user points their browser at the buildbot's `WebStatus' URL, they
will see the contents of the `public_html/index.html' file. Likewise,
`public_html/robots.txt', `public_html/buildbot.css', and
`public_html/favicon.ico' are all useful things to have in there.
The first time a buildmaster is created, the `public_html' directory
is populated with some sample files, which you will probably want to
customize for your own project. These files are all static: the
buildbot does not modify them in any way as it serves them to HTTP
clients. A minimal configuration simply appends a `WebStatus' instance
to the list of status targets:

     from buildbot.status.html import WebStatus
     c['status'].append(WebStatus(8080))

   Note that the initial robots.txt file has Disallow lines for all of
the dynamically-generated buildbot pages, to discourage web spiders
and search engines from consuming a lot of CPU time as they crawl
through the entire history of your buildbot. If you are running the
buildbot behind a reverse proxy, you'll probably need to put the
robots.txt file somewhere else (at the top level of the parent web
server), and replace the URL prefixes in it with more suitable values.

   If you would like to use an alternative root directory, add the
`public_html=..' option to the `WebStatus' creation:

     c['status'].append(WebStatus(8080, public_html="/var/www/buildbot"))

   In addition, if you are familiar with twisted.web _Resource
Trees_, you can write code to add additional pages at places inside
this web space. Just use `webstatus.putChild' to place these
resources.
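
   For example, a sketch that serves a static directory of project
documentation next to the status pages (the child name and path are
placeholders):

     from twisted.web.static import File

     ws = WebStatus(8080)
     ws.putChild("docs", File("/home/buildbot/project-docs"))
     c['status'].append(ws)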

   The following section describes the special URLs and the status
views they provide.

* Menu:

* WebStatus Configuration Parameters::
* Buildbot Web Resources::
* XMLRPC server::
* HTML Waterfall::


File: buildbot.info,  Node: WebStatus Configuration Parameters,  Next: Buildbot Web Resources,  Prev: WebStatus,  Up: WebStatus

7.1.1 WebStatus Configuration Parameters
----------------------------------------

The most common way to run a `WebStatus' is on a regular TCP port. To
do this, just pass in the TCP port number when you create the
`WebStatus' instance; this is called the `http_port' argument:

     from buildbot.status.html import WebStatus
     c['status'].append(WebStatus(8080))

   The `http_port' argument is actually a "strports specification"
for the port that the web server should listen on. This can be a
simple port number, or a string like `tcp:8080:interface=127.0.0.1'
(to limit connections to the loopback interface, and therefore to
clients running on the same host)(1).

   If instead (or in addition) you provide the `distrib_port'
argument, a twisted.web distributed server will be started either on a
TCP port (if `distrib_port' is like `"tcp:12345"') or more likely on
a UNIX socket (if `distrib_port' is like `"unix:/path/to/socket"').

   The `distrib_port' option means that, on a host with a
suitably-configured twisted-web server, you do not need to consume a
separate TCP port for the buildmaster's status web page. When the web
server is constructed with `mktap web --user', URLs that point to
`http://host/~username/' are dispatched to a sub-server that is
listening on a UNIX socket at `~username/.twisted-web-pb'. On such a
system, it is convenient to create a dedicated `buildbot' user, then
set `distrib_port' to
`"unix:"+os.path.expanduser("~/.twistd-web-pb")'. This configuration
will make the HTML status page available at `http://host/~buildbot/'
. Suitable URL remapping can make it appear at
`http://host/buildbot/', and the right virtual host setup can even
place it at `http://buildbot.host/' .
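
   As a sketch of the configuration just described, assuming the
buildmaster runs under a dedicated `buildbot' account:

     import os
     from buildbot.status.html import WebStatus
     c['status'].append(WebStatus(distrib_port="unix:" +
                                  os.path.expanduser("~/.twistd-web-pb")))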

   The other `WebStatus' argument is `allowForce'. If set to True,
then the web page will provide a "Force Build" button that allows
visitors to manually trigger builds. This is useful for developers to
re-run builds that have failed because of intermittent problems in
the test suite, or because of libraries that were not installed at
the time of the previous build. You may not wish to allow strangers
to cause a build to run: in that case, set this to False to remove
these buttons. The default value is False.
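
   For example, to serve on TCP port 8080 with the "Force Build"
buttons enabled:

     c['status'].append(WebStatus(8080, allowForce=True))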

   ---------- Footnotes ----------

   (1) It may even be possible to provide SSL access by using a
specification like
`"ssl:12345:privateKey=mykey.pen:certKey=cert.pem"', but this is
completely untested