This post is aimed at OBSD users who run Syncthing with a large number of files.
Unfortunately I don't have images of the problem, but I'll try to describe it.
Besides following the port's pkg-readme, I also increased kern.maxfiles to 1024000, and openfiles-cur and openfiles-max to 102400.
Syncthing is configured with nothing fancy, just SSL and user credentials.
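For reference, the kern.maxfiles change can be made persistent across reboots in /etc/sysctl.conf (the value is the one from this post):

```
# /etc/sysctl.conf
kern.maxfiles=1024000
```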

pf.conf:

beastie<@>BattleStar-T430 ~ 
§ doas cat /etc/pf.conf  
#	$OpenBSD: pf.conf,v 1.55 2017/12/03 20:40:04 sthen Exp $
#
# See pf.conf(5) and /etc/examples/pf.conf

#block return	# block stateless traffic
#pass		# establish keep-state

# By default, do not permit remote connections to X11
#block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
#block return out log proto {tcp udp} user _pbuild

ext_if = iwn0

# ----- DEFINITIONS -----
 
# Reassemble fragments
set reassemble yes
 
# Return ICMP for dropped packets
set block-policy return
 
# Enable logging on egress interface
set loginterface egress

set limit table-entries 1000000
set ruleset-optimization basic
 
# Allow all on Loopback interface
set skip on lo
 
# Define ICMP message types to let in
icmp_types = "{ 0, 8, 3, 4, 11, 30 }"

table <management> { 192.168.1.176 }
table <networks_sync> { 192.168.0.0/24 192.168.1.0/24 }
# Overload table referenced by the SSH rule below; declare it persist
table <fail2ban> persist

# ----- INBOUND RULES -----
# Scrub packets of weirdness
match in all scrub (no-df max-mss 1440)
match out all scrub (no-df max-mss 1440)
 
# Drop urpf-failed packets, add label uRPF
block in quick log from urpf-failed label uRPF
block quick log from <fail2ban>

# Security enhancements
block in from no-route to any
block in from urpf-failed to any
block in quick on $ext_if from any to 255.255.255.255
antispoof for $ext_if
block log all

# Pass in without restriction or rate limiting for whitelisted IPs
pass in quick inet proto tcp from <management> to any

# HTTP
pass in quick on $ext_if inet proto tcp from <management> to $ext_if port { 8384 }

# SyncThing
pass in quick on $ext_if inet proto tcp from <networks_sync> to $ext_if port { 22000 }
pass in quick on $ext_if inet proto udp from <networks_sync> to $ext_if port { 21027 }

# ICMP
pass in quick inet proto icmp icmp-type $icmp_types
pass in quick inet6 proto icmp6 

# SSH
pass in quick proto tcp from <management> \
    to port { 22 } \
    flags S/SA modulate state \
    (max-src-conn 5, max-src-conn-rate 5/5, overload <fail2ban> flush global)

# ----- ALL OTHER TRAFFIC TO BE DROPPED -----
 
#block in quick log on egress all
block quick proto tcp from <fail2ban>
 
# ----- OUTBOUND TRAFFIC -----
 
pass out quick on egress proto tcp from any to any modulate state
pass out quick on egress proto udp from any to any keep state
pass out quick on egress proto icmp from any to any keep state
pass out quick on egress proto icmp6 from any to any keep state

(I got this config from this article.)

My Syncthing topology so far is a Latitude 5400 running FBSD 14.1, a desktop running Void, and my Poco phone.
Directories:

  • Documents: 5,743 files, 661 directories, 17.6GiB
  • Downloads: 230 files, 1 directory, 411MiB
  • Games: 4,890 files, 1,153 directories, 19GiB
  • Music: 1,508 files, 153 directories, 7.63GiB
  • Pictures: 1,242 files, 27 directories, 475MiB
  • Poco: 167 files, 6 directories, 908MiB
  • Templates: 3,216 files, 1,048 directories, 12GiB
  • Videos: 62 files, 4 directories, 948MiB

All directories are configured to pull the smallest files first.

When I sync the folders one by one, I start with Pictures (the smallest) and leave the biggest ones for last. As soon as it starts to sync the Templates folder (before the Documents one), Syncthing gives me error messages like this one:
Listen (BEP/tcp): Accepting connection: accept tcp 0.0.0.0:22000: accept4: too many open files
and stops syncing everything altogether.
I also found this, but I'm not sure what that means.
Has anyone else here had a similar problem?

  • rvp replied to this.

    BaronBS I also found this, but I'm not sure what that means.

    That's the system-wide open files limit. What's the per-process limit?

    $ ulimit -n
    512
    $ ulimit -n 1024
    $ ulimit -n
    1024
    $

      rvp

      beastie<@>BattleStar-T430 ~ 
      § ulimit -n 
      8192
      • rvp replied to this.

        BaronBS Right, that's the soft-limit. You can raise it up to the hard-limit (ulimit -Hn). But, this is way smaller than what you said you configured (102400).

        If you added a different login class for syncthing, then check the limits for that using:

        getcap -f /etc/login.conf syncthing | tr '\t' '\n' | fgrep openfiles

        You can get a rough idea of how many files syncthing will open by adding up the no. of files and dirs. in the tree you want syncthing to operate on.
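That adding-up can be sketched with find(1), which prints every file and directory (each of which costs the kqueue watcher a descriptor). The tree below is a made-up example just to show the counting; against the real folders it would be something like find ~/Documents ~/Templates | wc -l.

```shell
# Build a tiny example tree (hypothetical paths, illustration only)
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
touch "$tmp/a/f1" "$tmp/a/b/f2" "$tmp/f3"

# find(1) prints the root, 2 subdirectories and 3 files: 6 entries,
# i.e. roughly 6 descriptors while a watcher holds them all open
find "$tmp" | wc -l

rm -rf "$tmp"
```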

        You can see how many files are open on the system by running sysctl -n kern.nfiles

        I presume you've already read /usr/local/share/doc/pkg-readmes/syncthing for the other alternative to using kqueue?

        EDIT: What's the output of:

        sysctl -n kern.maxfiles

          rvp Right, that's the soft-limit. You can raise it up to the hard-limit (ulimit -Hn). But, this is way smaller than what you said you configured (102400).

          beastie<@>BattleStar-T430 ~ 
          § ulimit -Hn
          16384
          beastie<@>BattleStar-T430 ~ 
          § sysctl -n kern.maxfiles                                         
          102400
          beastie<@>BattleStar-T430 ~ 
          § grep staff -A11 /etc/login.conf                                                                                         
          staff:\
          	:datasize-cur=infinity:\
          	:datasize-max=infinity:\
          	:maxproc-max=1024:\
          	:Maxproc-cur=512:\
          	:openfiles-cur=102400:\
          	:openfiles-max=102400:\
          	:stacksize-cur=32M:\
          	:ignorenologin:\
          	:requirehome@:\
          	:tc=default:

          rvp If you added a different login class for syncthing, then check the limits for that using:
          getcap -f /etc/login.conf syncthing | tr '\t' '\n' | fgrep openfiles

          I didn't; I start it from a tmux pane with:

          tmux new-session -d -s Daemons -n Syncthing '/usr/local/bin/syncthing --no-browser'
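One caveat with that approach (my assumption, not something from the readme): the tmux server keeps the resource limits of the shell it was first started from, so Syncthing can inherit a stale soft limit even after login.conf is fixed. Raising the soft limit up to the hard limit before launching looks roughly like this:

```shell
# Raise this shell's soft open-files limit to its hard limit,
# then print it to verify (works in sh/ksh)
ulimit -n "$(ulimit -Hn)"
ulimit -n
```

Folded into the tmux command above, that would be something like: tmux new-session -d -s Daemons -n Syncthing 'ulimit -n $(ulimit -Hn); exec /usr/local/bin/syncthing --no-browser'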

          rvp I presume you've already read /usr/local/share/doc/pkg-readmes/syncthing for the other alternative to using kqueue?

          I did, but I didn't understand what you meant by "alternative".

          • rvp replied to this.

            BaronBS I didn't; I start it from a tmux pane with:

            OK, so, as long as you have kern.maxfiles=102400 (or higher) in /etc/sysctl.conf and your user has a login class of staff (run vipw, then check the 5th field), then you should have no issues.

            Oh, and make sure to run cap_mkdb /etc/login.conf if you have a /etc/login.conf.db in place. Otherwise, changes made to /etc/login.conf won't "take".
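Checking that 5th field can also be scripted instead of using vipw; the entry below is hypothetical and only shows which field holds the login class:

```shell
# master.passwd fields: name:passwd:uid:gid:class:change:expire:gecos:home:shell
# (hypothetical entry, for illustration)
line='beastie:*:1000:1000:staff:0:0:Beastie:/home/beastie:/bin/ksh'
printf '%s\n' "$line" | awk -F: '{print $5}'   # prints the login class: staff

# On a real system:
#   doas grep "^${USER}:" /etc/master.passwd | awk -F: '{print $5}'
```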

            BaronBS I did, but I didn't understand what you meant by "alternative".

            I don't use syncthing, but, quoting from the readme:

            Another option is to turn off the file watcher and use only periodic scans.
            This will result in much reduced file descriptor usage at the cost of a
            (configurable) latency. See "watch for changes" and "full rescan interval" in
            the "advanced" tab in a folder's settings (on the web UI).

            Should be easy to find...
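For what it's worth, those two knobs also appear in Syncthing's config.xml as per-folder attributes (the folder id and path below are made up; the attribute names follow Syncthing's config format):

```xml
<!-- fragment of ~/.config/syncthing/config.xml (path may differ);
     fsWatcherEnabled="false" turns the file watcher off, and
     rescanIntervalS is the periodic full-scan interval in seconds -->
<folder id="templates" path="/home/beastie/Templates"
        rescanIntervalS="300" fsWatcherEnabled="false">
    <!-- other folder settings unchanged -->
</folder>
```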

              rvp Sorry about the delay, mate.
              So, my T430 with OBSD started rebooting randomly after I opened this thread. Since it's an old notebook, I think it could be a hardware problem. To test it, I backed up my OBSD install and I'm going to use FBSD on it for a while. If the reboots stop, it was probably OBSD's fault (or mine, if it was some config error on my part); if they don't, that points to a hardware problem.
              Once I have that answer I'll decide whether or not to come back to OBSD.
              Anyway, thank you for your time and patience.