(V0.12.25, Windows 10 Home 21H1 64-bit, Python 3.8)
(Note: a version of this was posted in Show the Community by mistake and then deleted, so apologies if this looks like a duplicate…)
I’ve just tried to build and run the BiT demo.
Two comments: some extra advice for anyone else doing the same, and a bug report.
Advice: don’t cut corners! Do this first:
pip install tensorflow_hub
Then either copy/paste the whole code at once, or, if editing line by line, keep this order (how I ended up doing the import second isn’t really relevant!):
import tensorflow_hub as hub
then
input_ = hub.KerasLayer("https://tfhub.dev/google/bit/s-r50x1/ilsvrc2012_classification/1")(input_)
It did not work for me in the other order: I suspect an interaction between parsing the code and caching in the 0.12.25 version of PL (IIRC there is a bug in the editability of custom components such that only the first edit is accepted).
After an initial simple error on my part, attempts to rebuild the model failed unless the edits were made in the order given above and in an incognito window, where there was no longer a cache hit on (presumably) the component name (the rebuilt model had a new name). Fortunately, I think the cache problem will be gone in the next release.
If you get it right the first time (pip install, then copy/paste all the code at once), it will probably be fine in this respect.
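For reference, here is a minimal sketch of what the custom-component code boils down to, assuming the demo simply wraps the TF Hub BiT classifier in a KerasLayer; the 224x224x3 input shape and the model/variable names other than input_ are just illustrative, not taken from the demo:

import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical input shape; 224x224 RGB is a common choice for the
# BiT ILSVRC-2012 classification head.
inputs = tf.keras.Input(shape=(224, 224, 3))
# Wrap the pretrained BiT classifier from TF Hub as a Keras layer.
outputs = hub.KerasLayer(
    "https://tfhub.dev/google/bit/s-r50x1/ilsvrc2012_classification/1"
)(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()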
BUG
When I did manage to rebuild and run, there were no errors in the LR panel under Errors, but this traceback cropped up in a dialog:
Traceback (most recent call last):
File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\site-packages\flask\app.py", line 1513, in full_dispatch_request
rv = self.dispatch_request()
File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\site-packages\flask\app.py", line 1499, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\site-packages\flask\views.py", line 83, in view
return self.dispatch_request(*args, **kwargs)
File "perceptilabs\endpoints\session\base.py", line 76, in perceptilabs.endpoints.session.base.SessionProxy.dispatch_request
File "perceptilabs\endpoints\session\threaded_executor.py", line 154, in perceptilabs.endpoints.session.threaded_executor.ThreadedExecutor.send_request
File "perceptilabs\endpoints\session\threaded_executor.py", line 175, in perceptilabs.endpoints.session.threaded_executor.ThreadedExecutor.get_task_info
File "perceptilabs\endpoints\session\threaded_executor.py", line 81, in perceptilabs.endpoints.session.threaded_executor.TaskCache.get
File "perceptilabs\endpoints\session\threaded_executor.py", line 91, in perceptilabs.endpoints.session.threaded_executor.TaskCache.get
File "perceptilabs\endpoints\session\threaded_executor.py", line 125, in perceptilabs.endpoints.session.threaded_executor.ThreadedExecutor.start_task.run_task
File "perceptilabs\endpoints\session\utils.py", line 114, in perceptilabs.endpoints.session.utils.run_kernel
File "c:\users\julian\anaconda3\envs\pl_tf250_py3810\lib\asyncio\base_events.py", line 616, in run_until_complete
return future.result()
File "perceptilabs\endpoints\session\utils.py", line 98, in run
RuntimeError: Task failed!
Any ideas about the bug?