Running PL as a service

Hey,

To unload my laptop I’m seeking to run PerceptiLabs as a service, which has proven cumbersome.

Are there guides on how to do this?

Hi @JLambrecht

Always nice to see another new face here! I’ve been using PL for most of this year now - maybe I can assist…

Can you clarify what you would like to achieve to “unload” your laptop and run PL as a service? What’s your setup?


Hey Julian,

Thanks for the warm welcome.

My laptop typically runs multiple browsers with many tabs open and multiple applications that I pivot through to work on various topics. My work has me running a handful of services which, I found, overload the CPU to 100% on all cores for considerable time when running updates, indexing, etc.

To this end I’d like to run PL on a separate VM on the local server, to which I’ve already migrated multiple services.

When starting PL I’d like to be able to pin the port it runs on, for example; right now it just opens the first port it can. I’d also need to better understand which services are running so I can configure the reverse proxy to handle them cleanly.

br,

Joris

Hi Joris

Ah, I see.

Hopefully @robertl will drop by to provide some info on PerceptiLabs Docker images; that could be a nice, simple way to run it on your server. Alternatively, you could set up your own VM with Python etc. The only thing is that training will be CPU-only, but if the local server is on all the time and it doesn’t affect other workloads that’s probably fine - I don’t know of any CUDA access in VMs (yet).

NB Is that a Windows or a Linux server?

What I don’t know is how best to access the PL server running in your VM… I guess you’d just like to be able to see it in your browser, but I suppose you could also use a remote desktop to the VM and use a browser within it…

I think PL uses a fixed port… hence the potential for conflict sometimes.

So, despite my fine intent I don’t seem to have been very helpful at all - I expect I’ll learn as much from Robert’s input as you will!

Thanks Julian

I don’t do Docker though; I loathe it, as it introduces a lot of unknowns and unpredictability.

It is a Linux server, and I’ve installed via Python so far. I think I’ll have it working once I figure out how to configure the startup parameters, which I could not find documented.

What I do is run a reverse-proxy setup, so it becomes: browser > reverse proxy > localhost of VM

The challenge here is to run PL in the Python virtual environment/sandbox and activate it so it runs on a number of fixed ports.

Regarding the CPU/GPU: I own a powerful AMD GPU and an old Nvidia GPU. I don’t get why developers insist on making code GPU-specific these days; there are standards out there and tools to migrate stuff.

AFAIK PL will start on one port and restart on port+1 if the port was taken, or something similar; that is what I observed.
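To illustrate, here’s a minimal Python sketch of that “first free port” behaviour as I understand it - my guess at the principle, not PL’s actual code:

```python
# Sketch (not PL's actual code) of "first available port" behaviour:
# try to bind a port, and fall back to port+1 until one succeeds.
import socket

def first_free_port(start: int = 8080, attempts: int = 10) -> int:
    for port in range(start, start + attempts):
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.bind(("127.0.0.1", port))
            s.close()
            return port
        except OSError:
            continue  # port already taken, try the next one
    raise RuntimeError("no free port found")
```

If that’s really what happens, it explains my reverse-proxy problem: the port depends on whatever else happens to be listening at startup, so it can’t be configured statically.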

When it comes to PL I’ll have plenty of questions. My personal goal is to use it for text data analysis.

I’ll definitely be learning from you re setups

I didn’t know there was any AMD support for TF… but suspected you knew something I didn’t: indeed there is now ROCm from AMD apparently. (medium article here).

And thanks for the info about PL port usage - that must be a recent change I was unaware of. Thanks for sharing that.

But re modelling… I have diverse interests too… I wanted to see whether I could get an NN to write like me :wink: (I have a 320k-word corpus in a single style to train on) and also wanted to mine that text for other interesting info (structure, sentiment, …)

I look forward to hearing more about your objectives sometime.

Thanks.

In summary, my objectives are text-file analytics based on pre-defined keywords, for which I hope an NN can learn to see the patterns I see, or at least hint at them with high confidence.


Hi @JLambrecht,
Welcome to the forum!

As @JulianSMoore mentioned, we have a Docker version which is the standard way of running PL on a remote server, but if that’s not great for you we can look at some other alternatives as well (including our Cloud version that’s coming up towards the end of this year).
If you are interested, we can hop on a call sometime this week and talk them through (and I would love to hear more about your usecase as well) :slight_smile:

Best,
Robert

Hey Robert

Thanks, but neither Docker nor Cloud for me.

It’s not much I’m asking: just to learn how to configure a fixed port for PerceptiLabs to run on.

At the moment, we unfortunately don’t have any way to manually set the ports in the local python version.
The python version is more of a very accessible free trial, so there are some limitations to it.
Let me know if there is anything else I can help out with though :slight_smile:

Thanks; sadly this makes the service unusable for me. I cannot simply guess which port a service will be running on.

I’m really sorry to hear that. We have heard the request to choose which port the tool runs on from some others as well, so we might open it up (I made a feature request for it here: https://perceptilabs.canny.io/feature-requests/p/be-able-to-choose-which-port-pl-should-run-on).
I can reach out to you when that happens if you want?

If you only need a specific static port I can also check with the devs if we can make a quick build for you that runs on that port instead of the standard.
Just to be clear though, there are a few services in PL, I’m guessing it’s the frontend port you are looking to set? And what port would you want it running on in that case?

Hey Robert

Thanks for the supportive feedback.

I’d rather not have a specific port hard-coded; the preferable approach is either a command-line parameter or a conf file where such a parameter can be defined.

The front-end is indeed what I’m working with; I don’t know PL well enough at this time. I’d figure that the ability to configure the ports of any service, so they can be controlled by administrators, is of interest to ensure smooth integration.

What you say makes total sense, but is unfortunately only possible in the Docker version right now.
I’ll let you know as soon as we have a solution that will work for you.

I don’t know whether this is of any use to you (maybe you already know this and can’t do it for some other reason) but, since I’m a can’t-take-no-can-do-for-an-answer type of guy, I dug up this:
redirecting ports with iptables in Linux

There are probably better resources, but it illustrates the principle…
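If iptables isn’t an option, the same redirect can be done in user space. Here’s a little Python relay sketch I put together (untested against PL itself; the function names are mine) that listens on a fixed port and forwards connections to wherever the service actually landed:

```python
# Sketch of a user-space port redirect: listen on a fixed public port and
# forward every connection to the port the service actually bound.
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    # Copy bytes one way until the source closes, then shut the other side.
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def forward(listen_port: int, target_port: int, host: str = "127.0.0.1") -> None:
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((host, listen_port))
    server.listen(5)
    while True:
        client, _ = server.accept()
        upstream = socket.create_connection((host, target_port))
        # One thread per direction so traffic flows both ways.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

E.g. `forward(8080, 8081)` once you’ve discovered where PL ended up - though of course you still have the “which port did it pick?” problem first.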

For Windows users, this might do the same.

(I haven’t tried either of these - thought I might try the windows method later, because I was one of the people who previously asked for port flexibility - though I can no longer remember why :man_facepalming:)

Just in case it helps.

Thanks Julian

Same here. I must admit I was short on time and motivation to dig much deeper.

My observation was that the ports are assigned on a ‘first available port’ principle for a service to bind a socket. This means I’d have to guess the port in iptables.

This is similar to the challenge of working with the reverse proxy I use on my server to terminate the HTTPS connections on various ports. It’s kind of working, but not really.

Due to the welcoming and supportive response on this forum I now feel both obliged and motivated to look at it again with more attention and time to spend. Hopefully soon.

FYI: it’s all Linux here; I only run MS in VMs.

Hi again @JLambrecht

I had a word with @robertl and he confirmed (“99% sure”) that PL has a hard-coded port number, so I’m not sure how/why you might have seen different ports in use.

It should be 8080 - in which case the port mapping would be straightforward…

If the port number is not fixed I’m sure Robert would like to know!

Hey Julian,

This is the behaviour I observed because there’s already a service running on one of the ports PL uses.

Cheers,

Joris

Hey @JLambrecht

Interesting news!

I investigated on my windows machine by running jupyter lab on a specific port with this

jupyter lab --ip=localhost --port=8080

Then I started PL and, although the server starts up, the port is taken and it’s inaccessible… as expected. But I did notice that PL detects the conflict - and then ignores the error. I think PL will deal with that better in future.
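Incidentally, the kind of pre-flight conflict check I mean is only a few lines of Python - a sketch of the idea, not PL’s actual code:

```python
# Quick check whether something is already listening on a port:
# connect_ex returns 0 on a successful connection, an error code otherwise.
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0
```

Something like `port_in_use(8080)` before binding would at least let the server fail loudly (or pick another port deliberately) instead of silently ignoring the conflict.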

It gets more interesting when we get to Linux. @robertl had one of the devs look into this for you and, though I don’t have the details, he reproduced the unexpected behaviour - but it’s not coded that way in PL as the windows test showed.

Suspicion now falls on some imported package that has different Windows/Linux behaviour.

Will keep you apprised as the facts become clearer!

Thanks for this update. I share the below with the best of intentions.

As an engineer I often find it hard to understand why developers go against the grain of best practices based on lessons learned by systems, network and security engineers.

As such, I hope some simple configuration file will be made available where IP/DNS:Port can be configured per PL service, and maybe even a port range, to make sure it can find an open port.
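Purely hypothetical - no such file exists in PL today, and the service names here are my own guesses - but this is the kind of thing I mean: an INI file with a bind address and port range per service, trivially parsed with Python’s standard library:

```python
# Hypothetical sketch of a per-service config file (PL has no such file today).
# Each section names a service and gives it a bind address and a port range.
import configparser

EXAMPLE = """
[frontend]
host = 127.0.0.1
ports = 8080-8090

[kernel]
host = 127.0.0.1
ports = 5000-5010
"""

def load_port_ranges(text: str) -> dict:
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    out = {}
    for section in cfg.sections():
        lo, hi = (int(p) for p in cfg[section]["ports"].split("-"))
        out[section] = (cfg[section]["host"], range(lo, hi + 1))
    return out
```

With that, an administrator controls exactly where each service may listen, and the app can still scan the range for a free port.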

Docker is a very developer-friendly technology, but it is also problematic to integrate securely, due to the many invisible aspects of a running Docker instance and the fact that it can run all kinds of tasks and schedulers invisible to the networking, systems and security infrastructure teams. In some places products simply don’t get approved due to this aspect; the same goes for cloud: clouds are a developer’s dream, but they are considerable security risks.

The need to configure services will surface as adoption of PL increases. There is no such thing as a one-size-fits-all solution; therefore a configuration file or documented configuration parameters are recommended, as well as the ability to simply run the service on-prem.

Should people being able to run on-prem be a concern, then using a process to ensure the integrity of the software and have it call home, or even requiring a license key file to run (also for community editions), would be a way to keep track of the on-prem instances in use.