So far my experience with Nextcloud has been that it is a pain in the arse to install, and once it's installed it's slow as anything. I literally couldn't run it on my Pi 3B; I've now got it up and running pretty nicely on a NUC, but it's still not great. I do have caching set up.
I have the Notes app installed on my Android phone and I can never use rich text editing because it gives a timeout error.
This shouldn’t be this complicated. All I want is to de-Google my documents and notes, and self-host my kanban. I don’t really need the rest though it’s nice to have the options.
Do people use alternatives? Am I doing something completely wrong? I set it up using nginx, which I know is not supported, but the alternative using Docker AIO didn't let me use a custom port easily.
I seriously suggest you give Nextcloud another go, this time under Docker. Very simple to do.
Save the following in a new folder as docker-compose.yml
```yaml
version: '3'

volumes:
  db:

services:
  nextcloud-app:
    image: nextcloud
    container_name: nextcloud-app
    restart: always
    volumes:
      - ./data:/var/www/html
    environment:
      - MYSQL_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=nextcloud-db
    ports:
      - "80:80"
    links:
      - nextcloud-db

  nextcloud-db:
    image: mariadb
    container_name: nextcloud-db
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=changeme
      - MYSQL_PASSWORD=changeme
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
```
Run this command in the folder:

```bash
docker-compose up -d
```

Then open http://localhost
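If the page doesn't come up, it's worth watching the logs and asking Nextcloud itself how it's doing before digging further. A quick sketch, assuming the container name nextcloud-app from the compose file above:

```bash
# Follow the app container's logs while it initialises
docker-compose logs -f nextcloud-app

# After the web installer has run, check the install state via occ
docker-compose exec -u www-data nextcloud-app php occ status
```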
Is the MariaDB a default part of Nextcloud? I've seen posts saying to use a separate DB so things can be backed up more easily, so I was wondering if that's how you have it set up above.
In this setup the DB is not part of Nextcloud. They run as separate services, i.e. containers, which can be administered independently of each other.
No, you can use other databases. It is separate here.
I suspect Nextcloud has performance issues with slow disk I/O. With rootless containers I had much worse performance than with rootful ones. Also, using a MySQL backend instead of SQLite sped things up.
Nevertheless I have the same problems with Nextcloud that you describe. It's not nearly as usable as I thought it would be.
It's on a SATA drive, albeit a hard drive rather than an SSD, and I'm using MariaDB. Everybody seems to suggest I need a beefier server, but as a developer myself, the functionality of the software doesn't seem to warrant anything more powerful.
Software config optimizations help a little, but my biggest improvement was moving the DB to an SSD. Spinning disks are great for capacity but not for DB performance: database workloads are dominated by random I/O, and spinning media drops off badly for that kind of access because the platters physically have to seek.
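If anyone wants to try the same move, the rough shape of it on a systemd-based distro is below. The /mnt/ssd mount point and the MariaDB config path are assumptions, so adjust for your system (and for AppArmor on Ubuntu):

```bash
# Stop the DB, copy its data dir onto the SSD, then repoint MariaDB at it
sudo systemctl stop mariadb
sudo rsync -a /var/lib/mysql/ /mnt/ssd/mysql/
# Set "datadir = /mnt/ssd/mysql" in /etc/mysql/mariadb.conf.d/50-server.cnf (path varies by distro)
sudo systemctl start mariadb
```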
I started out using Owncloud and later switched to Nextcloud once that fork was stable. For all my uses it has always needed beefy hardware to run well but I definitely have way more junk files in synced folders than I should & rarely clean things up.
Try moving at least the database onto an SSD, and enable Redis caching.
How much memory? I think nextcloud wants around 8gb to run happily (ymmv). I’ve tried it with smaller sizes and ran into issues.
Yes, I have 8 GB of RAM, but it seems insane that it needs that much considering what it is doing.
I have had it run with less, but it does a lot of image processing when you upload photos that I think needs more or something. I’ve never really taken the time to dig into what it’s doing. Could be some aggressive caching as well.
I think nextcloud wants around 8gb to run happily (ymmv).
As a developer myself, where did it go wrong?
I’m not sure it’s going “wrong”. It depends on the scenarios it’s designed for. If they intend it to be run on servers (there is no class of raspberry pi that is a server) then you design it to take advantage of those resources.
But it's designed for self-hosting, and it consumes all that without doing much.
I didn't mean to imply it needed server hardware. You can absolutely self-host Nextcloud. But RPis are the absolute lowest end of hardware for serving duties. They're great little systems, but they're designed to be cheap, not performant.
I currently self-host on a VM running on a 12-year-old x86 system with 8 GB RAM, with the Nextcloud file storage going over a 1 Gbps NFS mount. Not exactly a high-end setup, and it performs just fine. I was previously running on an AWS EC2 instance, where I noticed occasional issues on a t4g.small (only 2 GB RAM). I had to bump up to a medium at some point though.
It worked with less RAM pretty fine for a long time. But as I increased usage it would have issues occasionally. I think with all the images I have it was doing lots of processing for thumbnails and the like. I never really dove into it to see what exactly was going on though…
But still - a moderately old desktop system with 4-8G of RAM is just fine for “self-hosting”.
EDIT: I should add - I’m also hosting MariaDB on the same server - also with its data stored on an NFS share.
Just want to say that I’ve been there. There was a time my Nextcloud install was incredibly slow. Fortunately (or unfortunately?), it is featureful enough and widely supported that once you figure this issue out, it is a nice service to keep running.
For me, adding Redis was essential. It doesn’t really make sense to me why (nothing I do on Nextcloud is intensive or data heavy) but it has greatly improved the performance of my app.
My entire setup is a containerized Nextcloud, a Nextcloud cron container, MariaDB (if I'd known Postgres was an option, I would've chosen that), and Redis:
```yaml
version: '2'

services:
  nextcloud:
    container_name: nextcloud
    image: nextcloud:27-apache
    restart: unless-stopped
    environment:
      - MYSQL_PASSWORD=nextcloud
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    labels:
      - 'public-service=true'
      - 'traefik.enable=true'
      - 'traefik.http.routers.cloud.rule=Host(`nextcloud.some.domain`)'
      - 'traefik.http.routers.cloud.tls=true'
      - 'traefik.http.services.cloud.loadbalancer.server.port=80'
    volumes:
      - /some/data/dir/nextcloud/data:/var/www/html
      - /some/external/dir:/wew:ro

  nextcloud-cron:
    image: nextcloud:27-apache
    restart: unless-stopped
    command: [/cron.sh]
    environment:
      - MYSQL_HOST=db
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=nextcloud
    volumes:
      - /some/data/dir/nextcloud/data:/var/www/html
      - /some/external/dir/:/wew:ro

  db:
    image: mariadb:10.4
    restart: unless-stopped
    environment:
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_ROOT_PASSWORD: nextcloud
    volumes:
      - /some/data/dir/nextcloud/db:/var/lib/mysql

  mysqldump:
    image: mariadb:10.4
    depends_on: [db]
    # restart: never   # cronjob
    labels:
      - 'cron.schedule=0 0 8 * * ?'
    entrypoint: [mysqldump, -h, db, -u, nextcloud, -pnextcloud, --all-databases, -r, /out/nextcloud.sql]
    user: root
    volumes:
      - /some/data/dir/nextcloud/db-dump:/out

  redis:
    image: redis
    restart: unless-stopped
```
For what it's worth, you can convert the database to Postgres if you want. I tried it out a few weeks ago and it went flawlessly.
https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/db_conversion.html
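For anyone curious what that looks like, the conversion in those docs is driven by a single occ command, roughly like this; the Postgres user, host and database names here are placeholders, and it will prompt for the DB password:

```bash
# Convert the existing Nextcloud database in place to PostgreSQL
php occ db:convert-type --all-apps pgsql nextcloud_pg_user postgres-host nextcloud_pg_db
```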
I’ll try this next time I need to restore the DB from backup, cheers!
Yes, Redis should be part of the standard install. Not doing it is just setting yourself up for disappointment.
Also, I believe Postgres has better performance than MariaDB, so there's no reason not to use it if you are setting things up from scratch.
Postgres has significantly better performance in a smaller self-hosted environment, notably because you're doing a balance of reading and writing (or mostly writing, since data changes regularly). For large-scale operations where reading data is the primary use, MariaDB/MySQL is faster.
How is NC using Redis? I can't see any links to it from the NC container.
I configured it in config.php directly, probably following https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html#id2
```php
'memcache.local' => '\\OC\\Memcache\\APCu',
'memcache.distributed' => '\\OC\\Memcache\\Redis',
'memcache.locking' => '\\OC\\Memcache\\Redis',
'redis' => array (
  'host' => 'redis',
  'port' => 6379,
),
```
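If you ever want to double-check what's actually active, occ can read those values back out (run from the Nextcloud web root or inside the container):

```bash
# Confirm which cache backends Nextcloud thinks it is using
php occ config:system:get memcache.local
php occ config:system:get memcache.distributed
php occ config:system:get memcache.locking
```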
Thanks!
Can I ask why the separate NC container for cron? Also, I presume the mysqldump container is for easy db backups?
The separate cron container made the most sense to me. Other variants “work”, but IMO they are mostly workarounds to avoid setting up a real cronjob. Beyond that I have no real reason, nor can I vouch that it is more or less performant than the others.
Yes, the mysqldump container is for easier restores. It’s much easier to restore from a .sql file than a raw data dir that was copied while the DB was running ;) (speaking from experience…)
Nextcloud is not easy to set up, that's right, but it is not this complicated. Use Postgres as the database; it is faster than MySQL. Install Redis as a cache and configure the PHP cache too. That gives the biggest speedup. I use Nextcloud installed directly on the host, no Docker. There is another small guide here for Postgres and Apache.
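To give a rough idea of the PHP cache part: the settings below are along the lines of what the Nextcloud admin docs suggest for the opcache. The ini path is an assumption (it depends on your distro and PHP version), so treat this as a sketch rather than a drop-in file:

```ini
; e.g. /etc/php/8.2/fpm/conf.d/10-opcache.ini (path is an assumption)
opcache.enable=1
opcache.interned_strings_buffer=16
opcache.max_accelerated_files=10000
opcache.memory_consumption=128
opcache.save_comments=1
opcache.revalidate_freq=60
```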
For speed Seafile absolutely smokes Nextcloud.
If you create an account they'll give you a Pro license (limited to 3 users) for free, or you can stick with the always-free Community Edition, which works great too.
I use Syncthing for my files; I don't need a web UI, so it's great for me, and it handles huge directories easily.
I have Nextcloud running in Docker on a Raspberry Pi 4, and I'd say the performance is comparable to the OneDrive web interface. If you're getting timeouts then something must be wrong with the setup, not the machine it's running on. Using Postgres instead of MySQL or an SSD instead of an HDD is not going to fix that.
In my experience, Nextcloud is quite I/O bound: the performance of your storage device greatly affects Nextcloud's performance. But if you're already using an SSD and the performance is still bad, maybe there are other issues with your setup.
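If you want to measure rather than guess, a quick random-I/O benchmark on the drive holding the database and data directory is telling, since small random reads and writes are what the DB actually does. fio is widely packaged, and the test file path here is just an example:

```bash
# 4k random read/write test on the disk backing Nextcloud (adjust the path)
fio --name=nc-randrw --filename=/var/nextcloud-data/fio.test --size=1G \
    --bs=4k --rw=randrw --rwmixread=50 --ioengine=libaio --iodepth=32 \
    --direct=1 --runtime=60 --time_based --group_reporting
rm /var/nextcloud-data/fio.test
```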
For me, speed isn't the only issue. Everything about NC seems to be cobbled together in the most inconvenient way possible. Updates have always been hit or miss for me, and if you choose the dockerized versions, you might as well shoot yourself. Everything is very slow; even as the only user, with it running on a quite capable machine, it feels sluggish (not slow, but uncomfortably delayed).
It’s a glorified Dropbox clone, why do I need anything more than a rpi1 for that?
It's a PHP app inherited from ownCloud, so at some point you'll just have to accept it won't be as performant as apps written in compiled languages (and it also inherits ownCloud's quirks and other annoyances related to its PHP-based deployment). But this weakness is a strength too: being a PHP app makes extending its functionality very easy, resulting in a lot of community-developed plugins. Basically it's a trade-off between performance on one side and features plus community plugin availability on the other. If you value performance more and don't need anything beyond file sharing, there are plenty of other options right now.
I was thinking about this a couple of weeks ago. I'm running Nextcloud in a VM with the PHP recompiler, Redis, MariaDB, and plenty of RAM (4 GB). I'm spreading about 80 GB of data across a few users, but it's dog slow on mass upload. If I wanted to upload 1000 images from my phone, it would take hours. I moved those photos to my laptop, which was fast, then tried uploading them to Nextcloud via the Ubuntu desktop sync app, and it still took almost 2 hours. Nextcloud is backed by RAID6 storage, and benchmarks suggest it's over 300 MB/s write.
I think it has something to do with per-file transfer overhead (start/stop), similar to FTP, impacting WebDAV, but that's pure speculation on my part.
I was wondering what it would take to rewrite Nextcloud core functionality in Java and use some kind of different interface than WebDAV, but I’ve got a lot of irons in the fire at the moment.
Two hours for 1000 photos works out to 7.2 s per photo, which is very slow; I'm not sure it's just WebDAV overhead. Is uploading large files also slow (e.g. way below your raw network speed)?
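One way to separate per-file overhead from raw throughput is to push a single large file straight at the WebDAV endpoint with curl. The username, app password and hostname below are placeholders:

```bash
# Time a single large upload straight to Nextcloud's WebDAV endpoint
dd if=/dev/urandom of=/tmp/test-1g.bin bs=1M count=1024
curl -u alice:app-password -T /tmp/test-1g.bin \
  "https://nextcloud.example.com/remote.php/dav/files/alice/test-1g.bin" \
  -o /dev/null -w 'uploaded at %{speed_upload} bytes/sec\n'
```

If a single big file uploads near wire speed but 1000 small ones crawl, the bottleneck is per-request overhead rather than bandwidth.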
Could be related to this if you’re still on an older version: https://github.com/nextcloud/server/issues/33453
LAN is gigabit, and I can sustain Gb speeds in regular file transfers via mounted nfs shares. There isn’t much difference over Wi-Fi (ubiquiti APs). Also running the latest Nc, 27.0.2.1 or whatever it is.
I understand the history, and that may have been an excuse 6 months after the fork, but think about how long NC has existed now, and how many features (like migrations) apparently simply aren't being worked on.
NC is a great example of the current trend of “fuck good design, just throw more silicon at the problem”.
Nextcloud is hard to install manually (and sometimes even with Docker). As far as I know, both the Snap and YunoHost versions of Nextcloud are solid. I used the Snap version on the cheapest Linode VPS and it worked fine, especially after I doubled the swap to 1 GB. Now I use the YunoHost version and I've had only a good time with it. It is super stable, fast and reliable. I ran Nextcloud_ynh on an HP 800 Mini G3 with an i5-6500T and now on an ASRock mini PC with a Ryzen 7 5700G. It is working just great.
If you don't want to use Nextcloud, you can install Vikunja for kanban and tasks. For notes, HedgeDoc can be great.
+1 for Yunohost. I’ve never yet been able to figure out docker. Yunohost has kept me happy for years.
I have had issues with Nextcloud breaking randomly every time I've tried it. The thing I wanted most was the CalDAV/WebDAV integration with GNOME and DAVx5 (and to finally kill Google Calendar), and to get that I tried ownCloud instead. The UI leaves a lot to be desired, but if you only use the *DAV functionality it works like a charm. It also has a mobile app for syncing and several extensions, but I haven't delved into them.
If you only need CardDAV/CalDAV functionality, Radicale provides just that. It can be deployed as a container and works flawlessly with DAVx5.
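If anyone wants to kick the tyres outside a container first, Radicale is also just a pip install; the storage folder below is only an example, and by default it listens on localhost:5232:

```bash
# Minimal Radicale trial run (CalDAV/CardDAV server on localhost:5232)
python3 -m pip install --upgrade radicale
python3 -m radicale --storage-filesystem-folder=~/radicale/collections
```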
I run Nextcloud; it's responsive and has all my stuff in it: Notes, Calendar, Contacts, Kanboard, Photos, RSS reader, and others. You do need to look at the setup though: how many PHP processes are you running, and how much memory does MySQL use?
My current setup is a PHP VM with 6 cores and 8 GB of memory, and a MySQL VM with 2 cores and 8 GB of memory. But I work for a SaaS provider and that's how we carve up our systems: one VM/instance per job.
how many PHP processes
Curious: where do I set this number?
The way I sorted it was to run Nextcloud for a week, then run

```bash
ps aux
```

on the host and see what the memory use of a PHP process is. The fifth column (VSZ) is the memory use of a process; divide that into the amount of memory you want PHP to use. The number from ps is in kilobytes, not bytes, so you will need to do some maths to make it all fit. In Debian running PHP-FPM, in

```
/etc/php/{{ php_version }}/fpm/pool.d/www.conf
```

edit or add the lines below with the settings you need:

```ini
pm = dynamic
pm.max_children = 8
pm.start_servers = 2
pm.min_spare_servers = 2
pm.max_spare_servers = 3
```
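If you'd rather not eyeball the ps output, a one-liner like this gives a rough average resident memory per PHP-FPM worker; the process name php-fpm8.2 is an assumption, so match it to your PHP version:

```bash
# Average resident memory (MB) per PHP-FPM worker
ps --no-headers -o rss -C php-fpm8.2 | awk '{sum+=$1; n++} END {if (n) printf "%.0f MB average over %d processes\n", sum/n/1024, n}'
```

Divide the memory you want PHP to use by that average to get a sensible pm.max_children.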
Also, MySQL has some options you can change so it uses more or less memory; the handy MySQLTuner tool is your best way to work out what to set them to.
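Running it is just a matter of grabbing the script and pointing it at the database; the credentials here are placeholders:

```bash
# Fetch and run MySQLTuner against a local MariaDB/MySQL instance
wget -O mysqltuner.pl https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl --host 127.0.0.1 --user root --pass 'changeme'
```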
Pydio and Seafile are alternatives I've tried; Pydio was pretty fast too. I agree with you on Nextcloud: I want to like it, but I inevitably start having issues, and it's slow even after tuning. It just tries to do too much; it shouldn't be that complex to spin up a file server.
To be honest I'm not interested in the file-sharing side of Nextcloud, as I use Syncthing; I'm more interested in the utilities (e.g. notes, kanban) and the office capabilities. I want to replace G Suite.
Nginx is supported and a good choice. What database are you using? I’d recommend MariaDB.
Nginx alone will not speed up Nextcloud. Use Postgres instead of MySQL and configure Redis and the PHP cache; that gives the biggest speedup. And nginx is not officially supported, see here
The documentation says nginx is not officially supported: https://docs.nextcloud.com/server/19/admin_manual/installation/nginx.html
I am using mariadb
Ah right, sorry; the company doesn't support it directly, but the docs provide an example. To me as a tinkerer that was solid enough 😂😅
I guess you are doing something very wrong if you have such performance trouble all the time.
The Pi (up to 4) is known for bad disk I/O. Look into this area first.
I am running my NC on a weak old low-power desktop CPU (and with real SATA hard disks), and only when I ask for long-running jobs (like creating the previews and icons for 200 new photos) do I see any slow responses at all.
I'm not using a Pi, I'm using a J4125-based mini computer, which has made a big difference, but the performance still just isn't good enough.
That is better than my NUC, and I have no performance issues.
I've also had nothing but trouble with NC. I tried the AIO option, and while it was easier to set up, it was still slow on both a VPS and my local unRAID server. I find that if you're simply using it as a sync point for apps instead of regularly using the web portal, it's OK. Seafile is insanely fast, but it stores the data in chunks on the server, which some people don't like as it can complicate backups. I work around that by just backing up from one of my always-on clients, since the SeaDrive client mounts the chunks in a usable format. That works great.
Then again, NC is way more app than I need.