E0409 00:24:55.875405434    3803 oauth2_credentials.cc:236]            oauth_fetch: UNKNOWN:C-ares status is not ARES_SUCCESS qtype=A name=metadata.google.internal. is_balancer=0: Domain name not found {grpc_status:2, created_time:"2023-04-09T00:24:55.87538667+00:00"}

Running:  accelerate-launch --config_file=/kaggle/working/tpu_config.yaml /usr/local/lib/python3.8/site-packages/accelerate/test_utils/scripts/test_script.py
stderr: E0409 00:25:28.161483585    4186 oauth2_credentials.cc:236]            oauth_fetch: UNKNOWN:C-ares status is not ARES_SUCCESS qtype=A name=metadata.google.internal. is_balancer=0: Domain name not found {grpc_status:2, created_time:"2023-04-09T00:25:28.161468221+00:00"}
stderr: WARNING:root:Unsupported nprocs (2), ignoring...
stderr: E0409 00:26:00.284388269    4841 oauth2_credentials.cc:236]            oauth_fetch: UNKNOWN:C-ares status is not ARES_SUCCESS qtype=A name=metadata.google.internal. is_balancer=0: Domain name not found {grpc_status:2, created_time:"2023-04-09T00:26:00.284364729+00:00"}
stderr: E0409 00:26:00.311955347    4292 oauth2_credentials.cc:236]            oauth_fetch: UNKNOWN:C-ares status is not ARES_SUCCESS qtype=A name=metadata.google.internal. is_balancer=0: Domain name not found {grpc_status:2, created_time:"2023-04-09T00:26:00.311937341+00:00"}
stderr: E0409 00:26:00.312476581    4886 oauth2_credentials.cc:236]            oauth_fetch: UNKNOWN:C-ares status is not ARES_SUCCESS qtype=A name=metadata.google.internal. is_balancer=0: Domain name not found {created_time:"2023-04-09T00:26:00.312460117+00:00", grpc_status:2}
stderr: E0409 00:26:00.405514406    4961 oauth2_credentials.cc:236]            oauth_fetch: UNKNOWN:C-ares status is not ARES_SUCCESS qtype=A name=metadata.google.internal. is_balancer=0: Domain name not found {grpc_status:2, created_time:"2023-04-09T00:26:00.405495675+00:00"}
stderr: 2023-04-09 00:26:04.548942: F tensorflow/tsl/platform/statusor.cc:33] Attempting to fetch value instead of handling error UNKNOWN: TPU initialization failed: open(/dev/accel0): Operation not permitted: Operation not permitted; Couldn't open device: /dev/accel0; Unable to create Node RegisterInterface for node 0, config: device_path:   "/dev/accel0" mode: KERNEL debug_data_directory: "" dump_anomalies_only: true crash_in_debug_dump: false allow_core_dump: true; could not create driver instance
stderr: https://symbolize.stripped_domain/r/?trace=7ff5a692dce1,7ff5a692dd5f,7ff45bfccbff,7ff45c2d0a26,7ff45c2b3a71,7ff45c2b5fe2,7ff5a68e334e&map=04ceea301ec570e6abcf4ef3f089f0fde6516664:7ff459078000-7ff46cacf5e0
stderr: *** SIGABRT received by PID 4292 (TID 4292) on cpu 59 from PID 4292; stack trace: ***
stderr: 2023-04-09 00:26:04.551719: F tensorflow/tsl/platform/statusor.cc:33] Attempting to fetch value instead of handling error UNKNOWN: TPU initialization failed: open(/dev/accel0): Operation not permitted: Operation not permitted; Couldn't open device: /dev/accel0; Unable to create Node RegisterInterface for node 0, config: device_path:   "/dev/accel0" mode: KERNEL debug_data_directory: "" dump_anomalies_only: true crash_in_debug_dump: false allow_core_dump: true; could not create driver instance
stderr: https://symbolize.stripped_domain/r/?trace=7fb4e13cdce1,7fb4e13cdd5f,7fb419089bff,7fb41938da26,7fb419370a71,7fb419372fe2,7fb4e138334e&map=04ceea301ec570e6abcf4ef3f089f0fde6516664:7fb416135000-7fb429b8c5e0
stderr: *** SIGABRT received by PID 4293 (TID 4293) on cpu 27 from PID 4293; stack trace: ***
stderr: 2023-04-09 00:26:04.552956: F tensorflow/tsl/platform/statusor.cc:33] Attempting to fetch value instead of handling error UNKNOWN: TPU initialization failed: open(/dev/accel0): Operation not permitted: Operation not permitted; Couldn't open device: /dev/accel0; Unable to create Node RegisterInterface for node 0, config: device_path:   "/dev/accel0" mode: KERNEL debug_data_directory: "" dump_anomalies_only: true crash_in_debug_dump: false allow_core_dump: true; could not create driver instance
stderr: PC: @     0x7ff5a692dce1  (unknown)  raise
stderr:     @     0x7ff45844ca1a       1152  (unknown)
stderr: https://symbolize.stripped_domain/r/?trace=7fd907a3fce1,7fd907a3fd5f,7fd7bd0dbbff,7fd7bd3dfa26,7fd7bd3c2a71,7fd7bd3c4fe2,7fd9079f534e&map=04ceea301ec570e6abcf4ef3f089f0fde6516664:7fd7ba187000-7fd7cdbde5e0
stderr: *** SIGABRT received by PID 4291 (TID 4291) on cpu 1 from PID 4291; stack trace: ***
stderr:     @     0x7ff5a692dd60  1684000720  (unknown)
stderr: PC: @     0x7fb4e13cdce1  (unknown)  raise
stderr:     @     0x7fb3a6f4ca1a       1152  (unknown)
stderr:     @     0x7fb4e13cdd60  (unknown)  (unknown)
stderr: PC: @     0x7fd907a3fce1  (unknown)  raise
stderr:     @     0x7fd7b955ba1a       1152  (unknown)
stderr:     @     0x7fd907a3fd60  1262356880  (unknown)
stderr:     @     0x7fb419089c00        400  tsl::internal_statusor::Helper::Crash()
stderr:     @     0x7ff45bfccc00        400  tsl::internal_statusor::Helper::Crash()
stderr:     @     0x7fd7bd0dbc00        400  tsl::internal_statusor::Helper::Crash()
stderr:     @     0x7ff45c2d0a27        768  xla::PjRtComputationClient::PjRtComputationClient()
stderr:     @     0x7fb41938da27        768  xla::PjRtComputationClient::PjRtComputationClient()
stderr:     @     0x7fd7bd3dfa27        768  xla::PjRtComputationClient::PjRtComputationClient()
stderr:     @     0x7fb419370a72       1440  xla::ComputationClient::Create()
stderr:     @     0x7fd7bd3c2a72       1440  xla::ComputationClient::Create()
stderr:     @     0x7ff45c2b3a72       1440  xla::ComputationClient::Create()
stderr:     @     0x7fb419372fe3         32  std::call_once<>()::{lambda()#2}::_FUN()
stderr:     @     0x7fb4e138334f  (unknown)  __pthread_once_slow
stderr: https://symbolize.stripped_domain/r/?trace=7fb4e13cdce1,7fb3a6f4ca19,7fb4e13cdd5f,7fb419089bff,7fb41938da26,7fb419370a71,7fb419372fe2,7fb4e138334e&map=04ceea301ec570e6abcf4ef3f089f0fde6516664:7fb416135000-7fb429b8c5e0,ceee8fa20ddf9c34af43f587221e91de:7fb39a024000-7fb3a7163840
stderr: E0409 00:26:04.722291    4293 coredump_hook.cc:414] RAW: Remote crash data gathering hook invoked.
stderr: E0409 00:26:04.722314    4293 client.cc:278] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
stderr: E0409 00:26:04.722317    4293 coredump_hook.cc:512] RAW: Sending fingerprint to remote end.
stderr: E0409 00:26:04.722323    4293 coredump_socket.cc:120] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
stderr: E0409 00:26:04.722332    4293 coredump_hook.cc:518] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
stderr: E0409 00:26:04.722335    4293 coredump_hook.cc:580] RAW: Dumping core locally.
stderr:     @     0x7fd7bd3c4fe3         32  std::call_once<>()::{lambda()#2}::_FUN()
stderr:     @     0x7ff45c2b5fe3         32  std::call_once<>()::{lambda()#2}::_FUN()
stderr:     @     0x7fd9079f534f  (unknown)  __pthread_once_slow
stderr: https://symbolize.stripped_domain/r/?trace=    @     0x7ff5a68e334f  (unknown)  __pthread_once_slow
stderr: https://symbolize.stripped_domain/r/?trace=7fd907a3fce1,7fd7b955ba19,7fd907a3fd5f,7ff5a692dce1,7fd7bd0dbbff,7ff45844ca19,7fd7bd3dfa26,7ff5a692dd5f,7fd7bd3c2a71,7ff45bfccbff,7fd7bd3c4fe2,7ff45c2d0a26,7fd9079f534e7ff45c2b3a71,&map=7ff45c2b5fe2,7ff5a68e334e&map=04ceea301ec570e6abcf4ef3f089f0fde6516664:7fd7ba187000-7fd7cdbde5e0,ceee8fa20ddf9c34af43f587221e91de:7fd7ac633000-7fd7b9772840
stderr: 04ceea301ec570e6abcf4ef3f089f0fde6516664:7ff459078000-7ff46cacf5e0,ceee8fa20ddf9c34af43f587221e91de:7ff44b524000-7ff458663840E0409 00:26:04.726018    4291 coredump_hook.cc:414] RAW: Remote crash data gathering hook invoked.
stderr: 
stderr: E0409 00:26:04.726032    4292 coredump_hook.cc:414] RAW: Remote crash data gathering hook invoked.
stderr: E0409 00:26:04.726046    4291 client.cc:278] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
stderr: E0409 00:26:04.726050    4291 coredump_hook.cc:512] RAW: Sending fingerprint to remote end.
stderr: E0409 00:26:04.726060    4292 client.cc:278] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec.
stderr: E0409 00:26:04.726064    4292 coredump_hook.cc:512] RAW: Sending fingerprint to remote end.
stderr: E0409 00:26:04.726057    4291 coredump_socket.cc:120] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
stderr: E0409 00:26:04.726068    4291 coredump_hook.cc:518] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
stderr: E0409 00:26:04.726071    4291 coredump_hook.cc:580] RAW: Dumping core locally.
stderr: E0409 00:26:04.726070    4292 coredump_socket.cc:120] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket
stderr: E0409 00:26:04.726080    4292 coredump_hook.cc:518] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running?
stderr: E0409 00:26:04.726083    4292 coredump_hook.cc:580] RAW: Dumping core locally.
stderr: E0409 00:26:43.560479    4292 process_state.cc:784] RAW: Raising signal 6 with default behavior
stderr: E0409 00:26:43.566289    4293 process_state.cc:784] RAW: Raising signal 6 with default behavior
stderr: E0409 00:26:43.567120    4291 process_state.cc:784] RAW: Raising signal 6 with default behavior
stderr: https://symbolize.stripped_domain/r/?trace=7fef20ac8174,7fef20b11d5f&map=
stderr: *** SIGTERM received by PID 4290 (TID 4290) on cpu 46 from PID 4045; stack trace: ***
stderr: PC: @     0x7fef20ac8174  (unknown)  do_futex_wait.constprop.0
stderr:     @     0x7fedd2630a1a       1152  (unknown)
stderr:     @     0x7fef20b11d60  (unknown)  (unknown)
stderr:     @ ... and at least 1 more frames
stderr: https://symbolize.stripped_domain/r/?trace=7fef20ac8174,7fedd2630a19,7fef20b11d5f&map=ceee8fa20ddf9c34af43f587221e91de:7fedc5708000-7fedd2847840
stderr: E0409 00:26:43.647800    4290 coredump_hook.cc:360] RAW: Remote crash gathering disabled for SIGTERM.
stderr: E0409 00:26:43.966904    4290 process_state.cc:784] RAW: Raising signal 15 with default behavior
stderr: ╭───────────────────── Traceback (most recent call last) ──────────────────────╮
stderr: │ /usr/local/bin/accelerate-launch:8 in <module>                              │
stderr: │                                                                              │
stderr: │   5 from accelerate.commands.launch import main                              │
stderr: │   6 if __name__ == '__main__':                                               │
stderr: │   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])     │
stderr: │ ❱ 8 │   sys.exit(main())                                                     │
stderr: │   9                                                                          │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/accelerate/commands/launch.py:929 in  │
stderr: │ main                                                                         │
stderr: │                                                                              │
stderr: │   926 def main():                                                            │
stderr: │   927 │   parser = launch_command_parser()                                   │
stderr: │   928 │   args = parser.parse_args()                                         │
stderr: │ ❱ 929 │   launch_command(args)                                               │
stderr: │   930                                                                        │
stderr: │   931                                                                        │
stderr: │   932 if __name__ == "__main__":                                             │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/accelerate/commands/launch.py:919 in  │
stderr: │ launch_command                                                               │
stderr: │                                                                              │
stderr: │   916 │   │   if args.tpu_use_cluster:                                       │
stderr: │   917 │   │   │   tpu_pod_launcher(args)                                     │
stderr: │   918 │   │   else:                                                          │
stderr: │ ❱ 919 │   │   │   tpu_launcher(args)                                         │
stderr: │   920 │   elif defaults is not None and defaults.compute_environment == Comp │
stderr: │   921 │   │   sagemaker_launcher(defaults, args)                             │
stderr: │   922 │   else:                                                              │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/accelerate/commands/launch.py:685 in  │
stderr: │ tpu_launcher                                                                 │
stderr: │                                                                              │
stderr: │   682                                                                        │
stderr: │   683 │   main_function = getattr(mod, args.main_training_function)          │
stderr: │   684 │   with patch_environment(**current_env):                             │
stderr: │ ❱ 685 │   │   xmp.spawn(PrepareForLaunch(main_function), args=(), nprocs=arg │
stderr: │   686                                                                        │
stderr: │   687                                                                        │
stderr: │   688 def tpu_pod_launcher(args):                                            │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/torch_xla/distributed/xla_multiproces │
stderr: │ sing.py:386 in spawn                                                         │
stderr: │                                                                              │
stderr: │   383 │   return None.                                                       │
stderr: │   384   """                                                                  │
stderr: │   385   if pjrt.using_pjrt():                                                │
stderr: │ ❱ 386 │   return pjrt.spawn(fn, nprocs, start_method, args)                  │
stderr: │   387                                                                        │
stderr: │   388   if not _is_xla_config():                                             │
stderr: │   389 │   # If this is not an XLA setup, jump to normal multi-processing.    │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py:365 in │
stderr: │ spawn                                                                        │
stderr: │                                                                              │
stderr: │   362   elif nprocs is not None:                                             │
stderr: │   363 │   logging.warning('Unsupported nprocs (%d), ignoring...' % nprocs)   │
stderr: │   364                                                                        │
stderr: │ ❱ 365   _run_multiprocess(spawn_fn, start_method=start_method)               │
stderr: │   366                                                                        │
stderr: │   367                                                                        │
stderr: │   368 @requires_pjrt                                                         │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py:92 in  │
stderr: │ wrapper                                                                      │
stderr: │                                                                              │
stderr: │    89 │     raise NotImplementedError('`{}` not implemented for XRT'.format( │
stderr: │    90 │   │     fn.__name__))                                                │
stderr: │    91 │                                                                      │
stderr: │ ❱  92 │   return fn(*args, **kwargs)                                         │
stderr: │    93                                                                        │
stderr: │    94   return wrapper                                                       │
stderr: │    95                                                                        │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py:322 in │
stderr: │ _run_multiprocess                                                            │
stderr: │                                                                              │
stderr: │   319 │   │   fn=functools.partial(fn, *args, **kwargs),                     │
stderr: │   320 │   │   initializer_fn=_initialize_multiprocess)                       │
stderr: │   321 │   process_results = executor.map(mp_fn, range(num_processes))        │
stderr: │ ❱ 322 │   replica_results = list(                                            │
stderr: │   323 │   │   itertools.chain.from_iterable(                                 │
stderr: │   324 │   │   │   result.items() for result in process_results))             │
stderr: │   325                                                                        │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/site-packages/torch_xla/experimental/pjrt.py:323 in │
stderr: │ <genexpr>                                                                    │
stderr: │                                                                              │
stderr: │   320 │   │   initializer_fn=_initialize_multiprocess)                       │
stderr: │   321 │   process_results = executor.map(mp_fn, range(num_processes))        │
stderr: │   322 │   replica_results = list(                                            │
stderr: │ ❱ 323 │   │   itertools.chain.from_iterable(                                 │
stderr: │   324 │   │   │   result.items() for result in process_results))             │
stderr: │   325                                                                        │
stderr: │   326   if device_type() == 'GPU':                                           │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/concurrent/futures/process.py:484 in                │
stderr: │ _chain_from_iterable_of_lists                                                │
stderr: │                                                                              │
stderr: │   481 │   Each item in *iterable* should be a list.  This function is        │
stderr: │   482 │   careful not to keep references to yielded objects.                 │
stderr: │   483 │   """                                                                │
stderr: │ ❱ 484 │   for element in iterable:                                           │
stderr: │   485 │   │   element.reverse()                                              │
stderr: │   486 │   │   while element:                                                 │
stderr: │   487 │   │   │   yield element.pop()                                        │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/concurrent/futures/_base.py:619 in result_iterator  │
stderr: │                                                                              │
stderr: │   616 │   │   │   │   while fs:                                              │
stderr: │   617 │   │   │   │   │   # Careful not to keep a reference to the popped fu │
stderr: │   618 │   │   │   │   │   if timeout is None:                                │
stderr: │ ❱ 619 │   │   │   │   │   │   yield fs.pop().result()                        │
stderr: │   620 │   │   │   │   │   else:                                              │
stderr: │   621 │   │   │   │   │   │   yield fs.pop().result(end_time - time.monotoni │
stderr: │   622 │   │   │   finally:                                                   │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/concurrent/futures/_base.py:444 in result           │
stderr: │                                                                              │
stderr: │   441 │   │   │   │   if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]: │
stderr: │   442 │   │   │   │   │   raise CancelledError()                             │
stderr: │   443 │   │   │   │   elif self._state == FINISHED:                          │
stderr: │ ❱ 444 │   │   │   │   │   return self.__get_result()                         │
stderr: │   445 │   │   │   │   else:                                                  │
stderr: │   446 │   │   │   │   │   raise TimeoutError()                               │
stderr: │   447 │   │   finally:                                                       │
stderr: │                                                                              │
stderr: │ /usr/local/lib/python3.8/concurrent/futures/_base.py:389 in __get_result     │
stderr: │                                                                              │
stderr: │   386 │   def __get_result(self):                                            │
stderr: │   387 │   │   if self._exception:                                            │
stderr: │   388 │   │   │   try:                                                       │
stderr: │ ❱ 389 │   │   │   │   raise self._exception                                  │
stderr: │   390 │   │   │   finally:                                                   │
stderr: │   391 │   │   │   │   # Break a reference cycle with the exception in self._ │
stderr: │   392 │   │   │   │   self = None                                            │
stderr: ╰──────────────────────────────────────────────────────────────────────────────╯
stderr: BrokenProcessPool: A process in the process pool was terminated abruptly while
stderr: the future was running or pending.
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /usr/local/bin/accelerate:8 in <module>                                      │
│                                                                              │
│   5 from accelerate.commands.accelerate_cli import main                      │
│   6 if __name__ == '__main__':                                               │
│   7 │   sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])     │
│ ❱ 8 │   sys.exit(main())                                                     │
│   9                                                                          │
│                                                                              │
│ /usr/local/lib/python3.8/site-packages/accelerate/commands/accelerate_cli.py │
│ :45 in main                                                                  │
│                                                                              │
│   42 │   │   exit(1)                                                         │
│   43                                                                         │
│   44 │   # Run                                                               │
│ ❱ 45 │   args.func(args)                                                     │
│   46                                                                         │
│   47                                                                         │
│   48 if __name__ == "__main__":                                              │
│                                                                              │
│ /usr/local/lib/python3.8/site-packages/accelerate/commands/test.py:54 in     │
│ test_command                                                                 │
│                                                                              │
│   51 │   │   test_args = f"--config_file={args.config_file} {script_name}"   │
│   52                                                                         │
│   53 │   cmd = ["accelerate-launch"] + test_args.split()                     │
│ ❱ 54 │   result = execute_subprocess_async(cmd, env=os.environ.copy())       │
│   55 │   if result.returncode == 0:                                          │
│   56 │   │   print("Test is a success! You are ready for your distributed tr │
│   57                                                                         │
│                                                                              │
│ /usr/local/lib/python3.8/site-packages/accelerate/test_utils/testing.py:359  │
│ in execute_subprocess_async                                                  │
│                                                                              │
│   356 │   cmd_str = " ".join(cmd)                                            │
│   357 │   if result.returncode > 0:                                          │
│   358 │   │   stderr = "\n".join(result.stderr)                              │
│ ❱ 359 │   │   raise RuntimeError(                                            │
│   360 │   │   │   f"'{cmd_str}' failed with returncode {result.returncode}\n │
│   361 │   │   │   f"The combined stderr from workers follows:\n{stderr}"     │
│   362 │   │   )                                                              │
╰──────────────────────────────────────────────────────────────────────────────╯
RuntimeError: 'accelerate-launch --config_file=/kaggle/working/tpu_config.yaml 
/usr/local/lib/python3.8/site-packages/accelerate/test_utils/scripts/test_script
.py' failed with returncode 1

The combined stderr from workers follows:
[worker stderr repeated verbatim; identical to the output shown above]
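
For anyone picking this trace apart: the fatal error appears to be the repeated
"TPU initialization failed: open(/dev/accel0): Operation not permitted", which
aborts every worker with SIGABRT before the test script even runs; the
oauth_fetch / C-ares lines above it are metadata-server noise, and the
BrokenProcessPool at the end is just the parent noticing the dead workers. The
"Unsupported nprocs (2), ignoring..." warning matches pjrt.py:363 in the trace:
under the PJRT runtime, xmp.spawn ignores an explicit process count and forks
one worker per local TPU device. Below is a minimal pre-flight sketch, assuming
a Kaggle-style TPU VM with torch_xla's PJRT runtime; PJRT_DEVICE and
xla_device() are standard torch_xla, everything else is illustrative and not
recovered from this run:

    import os

    DEV = "/dev/accel0"  # the device node named in the crash above

    # 1. Can this process open the TPU device at all?  False here matches the
    #    "open(/dev/accel0): Operation not permitted" failure in the log.
    print("exists:", os.path.exists(DEV))
    print("read/write access:", os.access(DEV, os.R_OK | os.W_OK))

    # 2. Single-process init, bypassing accelerate's multiprocessing entirely.
    #    PJRT_DEVICE tells torch_xla's PJRT client which backend to create.
    os.environ["PJRT_DEVICE"] = "TPU"
    import torch_xla.core.xla_model as xm  # assumes torch_xla is installed

    print(xm.xla_device())  # prints an xla device once the driver initializes

If the access check fails, fixing the permissions on /dev/accel0 (or switching
to an image where the notebook user can open it) has to come before any
accelerate configuration.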