Hey,
To offload my laptop I'd like to run PerceptiLabs as a service, which has proven cumbersome.
Are there guides on how to do this?
Hi @JLambrecht
Always nice to see another new face here! I've been using PL for most of this year now - maybe I can assist…
Can you clarify what you would like to achieve to "unload" your laptop and run PL as a service? What's your setup?
Hey Julian,
Thanks for the warm welcome.
My laptop typically runs multiple browsers with many tabs open and multiple applications that I pivot between to work on various topics. My work has me running a handful of services, which I found overload the CPU to 100% on all cores for considerable time when running updates, indexing, etc.
To this end I'd like to run PL on a separate VM on the local server where I've already migrated multiple services.
When starting PL I'd like to be able to pin the port it runs on, for example; right now it just tries and opens the first port it can. I'd also need to better understand what services are running so I can configure the reverse proxy to handle it cleanly.
br,
Joris
Hi Joris
Ah, I see.
Hopefully @robertl will drop by to provide some info on PerceptiLabs Docker images; that could be a nice simple way to run it on your server. Alternatively, you could set up your own VM with Python etc. The only thing is that training will be CPU only, but if the local server is on all the time and it doesn't affect other workloads that's probably fine - I don't know of any CUDA access in VMs (yet).
NB Is that a Windows or a Linux server?
What I don't know is how best to access the PL server running in your VM… I guess you'd just like to be able to see it in your browser, but I suppose you could also use a remote desktop to the VM and use a browser within it…
I think PL uses a fixed port… hence the potential for conflict sometimes.
So, despite my fine intent I don't seem to have been very helpful at all - I expect I'll learn as much from Robert's input as you will!
Thanks Julian
I don't do Docker though; I loathe it, as it introduces a lot of unknowns and unpredictability.
It is a Linux server, and I've installed using Python so far. I think I'll have it working once I understand how to configure the startup parameters; I could not find this documented.
What I do is run a reverse-proxy setup so it becomes: browser > reverse-proxy > localhost of VM
The challenge here is to run PL in the Python virtual environment/sandbox and activate it so it runs on fixed ports.
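To illustrate, the proxy leg of that chain could look something like this in nginx (a sketch only; the server name, certificate paths, VM address, and port are all placeholders, since PL's actual port isn't configurable yet):

```nginx
# Hypothetical nginx site config: terminate HTTPS and forward to the
# PerceptiLabs frontend running in the VM. All names/addresses/ports
# here are placeholders, not real PL settings.
server {
    listen 443 ssl;
    server_name pl.example.lan;           # placeholder hostname

    ssl_certificate     /etc/ssl/pl.crt;  # placeholder cert paths
    ssl_certificate_key /etc/ssl/pl.key;

    location / {
        proxy_pass http://192.168.1.50:8080;  # VM address, assumed port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The whole setup only works cleanly once the upstream port is pinned, which is exactly the problem.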
Regarding the CPU/GPU: I own a powerful AMD GPU and an old Nvidia GPU. I don't get why developers insist on making code GPU-specific these days; there are standards out there and tools to migrate stuff.
AFAIK PL will start on one port and restart on port+1 if the port was taken, or something similar - it is what I observed.
When it comes to PL I'll have plenty of questions. My personal goal is to use it for text data analysis.
I'll definitely be learning from you re setups.
I didn't know there was any AMD support for TF… but suspected you knew something I didn't: indeed there is now ROCm from AMD apparently. (medium article here).
And thanks for the info about PL port usage - that must be a recent change I was unaware of.
But re modelling… I have diverse interests too… I wanted to see whether I could get a NN to write like me (I have a 320k-word corpus in a single style to train on) and also wanted to mine that text for other interesting info (structure, sentiment, …)
I look forward to hearing more about your objectives sometime.
Thanks.
In summary, my objective is text-file analytics based on pre-defined keywords, for which I hope a NN can learn to see the patterns I see, or at least hint at them with high confidence.
Hi @JLambrecht,
Welcome to the forum!
As @JulianSMoore mentioned, we have a Docker version which is the standard way of running PL on a remote server, but if that's not great for you we can look at some other alternatives as well (including our Cloud version that's coming up towards the end of this year).
If you are interested, we can hop on a call sometime this week and talk them through (and I would love to hear more about your use case as well).
Best,
Robert
Hey Robert
Thanks, but neither Docker nor Cloud for me.
It's not much I'm asking: just how to configure a fixed port for PerceptiLabs to run on.
At the moment, we unfortunately don't have any way to manually set the ports in the local Python version.
The Python version is more of a very accessible free trial, so there are some limitations to it.
Let me know if there is anything else I can help out with, though.
Thanks; sadly this makes the service unusable for me. I cannot simply guess what port a service will be running on.
I'm really sorry to hear that. We have heard the request to choose which port the tool runs on from some others as well, so we might open it up (I made a feature request for it here: https://perceptilabs.canny.io/feature-requests/p/be-able-to-choose-which-port-pl-should-run-on).
I can reach out to you when that happens if you want?
If you only need a specific static port, I can also check with the devs whether we can make a quick build for you that runs on that port instead of the standard one.
Just to be clear though, there are a few services in PL; I'm guessing it's the frontend port you are looking to set? And what port would you want it running on in that case?
Hey Robert
Thanks for the supportive feedback.
I'd rather not hard-code a specific port; the preferable approach is either a command-line parameter or a conf file where such a parameter can be defined.
The front end is indeed what I'm working with. I don't know PL well enough at this time, but I'd figure the ability to configure the ports of any service, so these can be controlled by administrators, is of interest to ensure smooth integration.
What you say makes total sense, but it is unfortunately only possible in the Docker version right now.
I'll let you know as soon as we have a solution that will work for you.
I don't know whether this is of any use to you (maybe you already know this and can't do it for some other reason) but, since I'm a can't-take-no-can-do-for-an-answer type of guy, I dug up this:
redirecting ports with iptables in Linux
There are probably better resources, but it illustrates the principle…
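The gist, if PL does end up on 8080 and you want to reach it on, say, port 80, would be something like this (untested, as I said, and the port numbers are assumptions):

```shell
# Untested sketch: redirect incoming TCP port 80 to 8080 on the same
# host (needs root). Port numbers are assumptions for illustration.
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080

# Connections originating on the host itself bypass PREROUTING, so
# those need a matching rule in the OUTPUT chain:
sudo iptables -t nat -A OUTPUT -p tcp -o lo --dport 80 -j REDIRECT --to-port 8080
```

Caveat: this only helps if the destination port (8080 here) is actually known and stable.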
For Windows users, this might do the same.
(I haven't tried either of these - thought I might try the Windows method later, because I was one of the people who previously asked for port flexibility - though I can no longer remember why.)
Just in case it helps.
Thanks Julian
Same here. I must admit I was short on time and motivation to dig much deeper.
My observation was that the ports are assigned on a "first available port" principle for a service to hook up to a socket. This means I'd have to guess the port in iptables.
This is similar to the challenge when working with the reverse proxy I use on my server to terminate the HTTPS connections on various ports. It's kind of working, but not really.
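For what it's worth, the guessing can be partly automated: a small probe that walks upward from a base port until a bind succeeds mimics the "first available port" behaviour I observed, and so predicts where a service would land. A sketch, assuming python3 is available for the bind test and 8080 as the base:

```shell
# Probe upward from a base port until a local bind succeeds, mimicking
# the "first available port" behaviour described above. Uses python3's
# socket module for the bind test.
first_free_port() {
  local port="$1"
  until python3 -c "import socket; s = socket.socket(); s.bind(('127.0.0.1', $port)); s.close()" 2>/dev/null
  do
    port=$((port + 1))
  done
  echo "$port"
}

first_free_port 8080   # prints the first free port at or above 8080
```

It's still a guess, of course - another process could grab the port between the probe and the service starting.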
Due to the welcoming and supportive response on this forum, I now feel both obliged and motivated to look at it again with more attention and time to spend. Hopefully soon.
FYI: it's all Linux here; I only run MS in VMs.
Hi again @JLambrecht
I had a word with @robertl and he confirmed ("99% sure") that PL has a hard-coded port number, so I'm not sure how/why you might have seen different ports in use.
It should be 8080 - in which case the port mapping would be straightforward…
If the port number is not fixed, I'm sure Robert would like to know!
Hey Julian,
That's the behavior I observed because there's already a service running on one of the ports PL uses.
Cheers,
Joris
Hey @JLambrecht
Interesting news!
I investigated on my Windows machine by running Jupyter Lab on a specific port with this:
jupyter lab --ip=localhost --port=8080
Then I started PL and, although the server starts up, the port is taken and it's inaccessible… as expected. But I did notice that PL detects the conflict - and then ignores the error. I think PL will deal with that better in future.
It gets more interesting when we get to Linux. @robertl had one of the devs look into this for you and, though I don't have the details, he reproduced the unexpected behaviour - but it's not coded that way in PL, as the Windows test showed.
Suspicion now falls on some imported package that has different Windows/Linux behaviour.
Will keep you apprised as the facts become clearer!
Thanks for this update. I share the below with the best of intentions.
As an engineer, I often find it hard to understand why developers go against the grain of best practices based on lessons learned by system, network, and security engineers.
As such, I hope some simple configuration file will be made available where IP/DNS:Port can be configured per PL service, and maybe even a port range, to make sure it can find an open port.
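To make the suggestion concrete, something like this is what I have in mind (entirely hypothetical - no such file exists in PL today, and the section and key names are invented):

```ini
# Hypothetical perceptilabs.conf - an illustration of the request only.
# PL does not currently read any such file; all names here are made up.
[frontend]
bind = 0.0.0.0
port = 8080

[api]
bind = 127.0.0.1
port = 8081
# Fall back within a range if the preferred port is taken:
port_range = 8081-8090
```

With something like this, an administrator can point the reverse proxy at known ports instead of guessing.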
Docker is a very developer-friendly technology, but it is also problematic to integrate securely, due to the many invisible aspects of a running Docker instance and the fact that it can run all kinds of tasks and schedulers invisible to the networking, systems, and security teams. In some places products simply don't get approved due to this aspect; the same goes for cloud - clouds are developers' dreams, but they are considerable security risks.
The ability to configure services will resurface as adoption of PL increases. There is no such thing as a one-size-fits-all solution; therefore a configuration file or documented configuration parameters are recommended, as well as the ability to simply run the service on-prem.
Should people being able to run on-prem be a concern, then a process to ensure the integrity of the software and have it call home, or even require a license-key file to run (also for community editions), would be a solution to keep track of the on-prem instances in use.