Compare commits

...

36 Commits

Author SHA1 Message Date
copilot-swe-agent[bot] 543d5b11db Add explicit lifetime annotations to fix openSUSE warnings
Co-authored-by: nitnelave <796633+nitnelave@users.noreply.github.com>
2025-11-16 10:59:20 +00:00
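A minimal sketch of the pattern behind such warnings (hypothetical struct, assumed from the `AttributeDescription<'_>` annotations appearing later in this diff): newer compilers warn when a return type hides an elided lifetime, and writing `<'_>` makes the borrow explicit.

```rust
// Hypothetical example: a struct borrowing its input string.
struct AttributeDescription<'a> {
    name: &'a str,
}

// Without the `<'_>`, newer rustc versions warn that the returned value
// borrows from `name` through a hidden elided lifetime.
fn resolve(name: &str) -> Option<AttributeDescription<'_>> {
    Some(AttributeDescription { name })
}

fn main() {
    let d = resolve("mail").expect("always Some");
    assert_eq!(d.name, "mail");
}
```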
copilot-swe-agent[bot] ac0e0780e9 Initial plan 2025-11-16 10:30:36 +00:00
Tobias Jungel ab4389fc5f fix(bootstrap): set shopt nullglob
Set the `nullglob` option in the bootstrap script to handle cases where
no files match a glob pattern.

This prevents the following error when the folder exists without json
files:

```
/bootstrap/group-configs/*.json: jq: error: Could not open file /bootstrap/group-configs/*.json: No such file or directory
```
2025-11-09 22:35:50 +01:00
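The behavior the fix relies on can be seen in a small sketch (hypothetical directory; assumes bash, since `shopt` is a bash builtin):

```shell
#!/usr/bin/env bash
mkdir -p /tmp/group-configs-demo   # folder exists, but contains no .json files

# Default behavior: an unmatched glob is passed through literally, so jq
# would receive the string "/tmp/group-configs-demo/*.json" and fail.
shopt -u nullglob
for f in /tmp/group-configs-demo/*.json; do echo "literal: $f"; done

# With nullglob, the unmatched pattern expands to nothing and the loop body
# never runs, so jq is simply not invoked.
shopt -s nullglob
for f in /tmp/group-configs-demo/*.json; do echo "unreachable: $f"; done
```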
Tobias Jungel ddcbe383ab docs: Rename 'mail_alias' to 'mail-alias' in example config (#1346)
The example included the invalid character `_` in the attribute name.

This resulted in:

```
Cannot create attribute with invalid name. Valid characters: a-z, A-Z, 0-9, and dash (-). Invalid chars found: _
```

This fixes the example by using a `-`.
2025-11-09 12:07:44 +01:00
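The validation rule quoted in the error message can be sketched as a small predicate (hypothetical helper, not lldap's actual implementation):

```rust
// Attribute names may contain a-z, A-Z, 0-9 and dash (-), per the error message.
fn is_valid_attribute_name(name: &str) -> bool {
    !name.is_empty() && name.chars().all(|c| c.is_ascii_alphanumeric() || c == '-')
}

fn main() {
    assert!(is_valid_attribute_name("mail-alias"));
    assert!(!is_valid_attribute_name("mail_alias")); // `_` is rejected
}
```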
Sören eee42502f3 docs: fix example_configs path
from ./example_configs to ../example_configs
2025-10-21 15:42:06 +02:00
thchha 660301eb5f example_configs: add initial gogs.md documentation
Gogs is the origin of several common git forges, so we add documentation
that may also be useful where lldap is used with one of its derivatives.
Gogs appears to be in maintenance mode; the current example may have to be
extended in the future.

We adapt the official documentation's example configuration to integrate
lldap with the more elaborate example.
The reader may also be interested in a simpler example at
[upstream](https://github.com/gogs/gogs/blob/main/conf/auth.d/ldap_simple_auth.conf.example).
2025-10-21 00:07:46 +02:00
Nassim Bounouas 73f071ce89 docs: lldap password in docker install corrected 2025-10-18 12:44:59 +02:00
Copilot 28ef6e0c56 example_configs: mailserver,
fix outdated roundcube mounts and filters
2025-10-18 12:20:29 +02:00
Shawn Wilsher a32c8baa25 misc: improve vscode devcontainer experience
This change enables a better IDE experience in vscode by doing two
things:
1) Enables rust-analyzer, which unlocks many IDE features in
   vscode
2) Installs the needed deps for `cargo fmt` to work.
2025-10-14 11:54:48 +02:00
Copilot bf5b76269f server: Refactor config_overrides to use Option::inspect
To reduce cyclomatic complexity.
2025-10-12 20:14:20 +02:00
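`Option::inspect` (stable since Rust 1.76) runs a side effect on the contained value without unwrapping it, which is what lets an `if let` branch collapse into a single expression; a minimal sketch with made-up values:

```rust
fn main() {
    let override_value: Option<u16> = Some(3890);
    let mut log: Vec<String> = Vec::new();

    // Before: `if let Some(port) = override_value { log.push(...); }`
    // After: one chained call, no extra branch.
    override_value.inspect(|port| log.push(format!("overriding ldap_port to {port}")));

    assert_eq!(log, vec!["overriding ldap_port to 3890".to_string()]);
}
```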
Hendrik Sievers c09e5c451c example_configs: update SSSD guide 2025-10-11 08:39:25 +02:00
Valentin Tolmer 1382c67de9 server: Extract configuration utilities 2025-10-10 23:28:35 +02:00
Copilot 0f8f9e1244 server: split up update_user_with_transaction 2025-10-10 09:01:52 +02:00
Webysther Sperandio 9a83e68667 app: Set a key for user/group creation buttons
That prevents them from jumping around when changing pages.
2025-10-10 00:28:11 +02:00
Copilot 3f9880ec11 server: Move LDAP search tests to their respective implementation files
Move user and group tests to their respective implementation files

User tests → core/user.rs:
- test_search_regular_user
- test_search_readonly_user
- test_search_member_of
- test_search_user_as_scope
- test_search_users
- test_pwd_changed_time_format

Group tests → core/group.rs:
- test_search_groups
- test_search_groups_by_groupid
- test_search_groups_filter
- test_search_groups_filter_2
- test_search_groups_filter_3
- test_search_group_as_scope

Tests remain in search.rs:
- DSE/schema tests
- General search logic tests
- Filter tests
- Error handling tests
- OU search tests
- Mixed user/group tests
2025-10-10 00:21:32 +02:00
Valentin Tolmer 94007aee58 readme: Add a link to the configuration guide's readme 2025-10-04 23:24:46 +02:00
Copilot 9e9d8e2ab5 graphql: split query.rs and mutation.rs into modular structures (#1311) 2025-10-04 23:09:36 +02:00
Lucas Sylvester 18edd4eb7d example_configs: update portainer group membership and filter attributes
The current description is wrong, and will make Portainer try to assign "group" to be a member of "group" instead of assigning the "user" to the "group"
2025-10-04 22:16:00 +02:00
Jonas Resch 3cdf2241ea example_configs: Improve bootstrap.sh and documentation for use with Kubernetes (#1245) 2025-09-28 14:02:06 +02:00
thchha 9021066507 example_configs: Add configuration example for Open WebUI
This documents a working (LDAPS) configuration for using lldap in Open WebUI.

The environment variables were taken directly from the logs.
The names of the GUI variables are taken from the UI.
Version v0.6.26.

The two configuration options are then put in a table, with a short
explanation and example values.

Other than additionally mounting the CA chain into the container (with appropriate rights), no additional steps were required.
The ownership of the CA chain will be changed to `501:` (via `chown`).
2025-09-28 13:55:29 +02:00
Copilot fe063272bf chore: add Nix flake-based development environment
Co-authored-by: Kumpelinus <kumpelinus@jat.de>

- Add Nix flake and lockfile for reproducible development environments
- Document Nix-based setup in `docs/nix-development.md`
- Add `.envrc` for direnv integration and update `.gitignore` for Nix/direnv artifacts
- Reference Nix setup in CONTRIBUTING.md
2025-09-28 13:51:41 +02:00
RealSpinelle 59dee0115d example_configs: add missing fields to authentik example 2025-09-24 16:03:56 +02:00
Valentin Tolmer 622274cb1a chore: fix codecov config 2025-09-22 09:34:37 +02:00
Valentin Tolmer 4bad3a9e69 chore: reduce codecov verbosity 2025-09-22 01:01:00 +02:00
Copilot 84fb9b0fd2 Fix pwdChangedTime format to use LDAP GeneralizedTime instead of RFC3339 (#1300)
When querying for pwdChangedTime, the timestamp is returned in RFC3339 format instead of the expected LDAP GeneralizedTime format (YYYYMMDDHHMMSSZ). This causes issues when LLDAP is used with systems like Keycloak that expect proper LDAP timestamp formatting.
2025-09-22 00:42:51 +02:00
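The two formats side by side, with a hand-rolled formatter for illustration (the real code presumably formats via chrono, which is in the dependency tree):

```rust
// RFC3339:         2025-09-22T00:42:51+00:00  (what was returned before the fix)
// GeneralizedTime: 20250922004251Z            (YYYYMMDDHHMMSSZ, what LDAP expects)
fn generalized_time(y: u32, mo: u32, d: u32, h: u32, mi: u32, s: u32) -> String {
    format!("{y:04}{mo:02}{d:02}{h:02}{mi:02}{s:02}Z")
}

fn main() {
    assert_eq!(generalized_time(2025, 9, 22, 0, 42, 51), "20250922004251Z");
}
```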
Valentin Tolmer 8a803bfb11 ldap: normalize base DN in LdapInfo, reduce memory usage
By making it a &'static, we can have a single allocation for all the threads/async contexts.

This also normalizes the whitespace from the user input; a trailing \n can cause weird issues with clients
2025-09-17 01:03:19 +02:00
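One way to obtain such a `&'static str` is to normalize the configured DN once at startup and leak the allocation, so every thread shares the same pointer; a sketch of the idea (trimming only, since the commit mentions whitespace normalization):

```rust
// Leaking the normalized string yields a &'static str with a single allocation
// shared by all threads/async contexts for the lifetime of the process.
fn normalize_base_dn(input: &str) -> &'static str {
    Box::leak(input.trim().to_owned().into_boxed_str())
}

fn main() {
    // A trailing \n from user input no longer reaches LDAP clients.
    assert_eq!(normalize_base_dn("dc=example,dc=com\n"), "dc=example,dc=com");
}
```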
Valentin Tolmer f7fe0c6ea0 ldap: fix swapped filter conditions 2025-09-16 14:58:46 +02:00
Valentin Tolmer 8f04843466 ldap: Simplify boolean expressions derived from filters 2025-09-16 01:58:41 +02:00
Hobbabobba 400beafb29 example_config: Add pocket-id 2025-09-16 01:40:08 +02:00
dependabot[bot] 963e58bf1a build(deps): bump tracing-subscriber from 0.3.18 to 0.3.20
Bumps [tracing-subscriber](https://github.com/tokio-rs/tracing) from 0.3.18 to 0.3.20.
- [Release notes](https://github.com/tokio-rs/tracing/releases)
- [Commits](https://github.com/tokio-rs/tracing/compare/tracing-subscriber-0.3.18...tracing-subscriber-0.3.20)

---
updated-dependencies:
- dependency-name: tracing-subscriber
  dependency-version: 0.3.20
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-09-16 01:10:06 +02:00
Kumpelinus 176c49c78d chore: upgrade Rust toolchain to 1.89 and modernize code with let-chains 2025-09-16 00:48:16 +02:00
Copilot 3d5542996f chore: Add CodeRabbit configuration to reduce agent verbosity 2025-09-16 00:12:45 +02:00
psentee 4590463cdf auth: serialize exp and iat claims as NumericDate to comply with RFC7519 (#1289)
Add `jti` claim to the JWT to avoid hashing collisions
2025-09-15 17:24:59 +02:00
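RFC 7519 defines NumericDate as seconds since the Unix epoch, serialized as a JSON number rather than an RFC3339 string; a stdlib-only sketch of the conversion:

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// NumericDate: whole seconds since 1970-01-01T00:00:00Z.
fn numeric_date(t: SystemTime) -> u64 {
    t.duration_since(UNIX_EPOCH).expect("after epoch").as_secs()
}

fn main() {
    let iat = UNIX_EPOCH + Duration::from_secs(1_757_900_000);
    let exp = iat + Duration::from_secs(86_400);
    // The serialized claims are plain integers: {"iat":1757900000,"exp":1757986400}
    assert_eq!(numeric_date(iat), 1_757_900_000);
    assert_eq!(numeric_date(exp) - numeric_date(iat), 86_400);
}
```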
lordratner 85ce481e32 Update opnsense.md
Added instructions for using/not using Constraint Groups. This option is selected by default and the current instructions do not address it; if it is left on and the Authentication Containers are not updated, the group sync will fail.
2025-09-14 15:53:05 +02:00
Valentin Tolmer f64f8625f1 Add username to password recovery emails 2025-09-14 15:44:37 +02:00
Alexandre Foley c68f9e7cab example_configs: fix the quadlet readme
Several "podman" commands should have been "systemctl" from the start.
2025-09-04 22:23:12 +02:00
73 changed files with 3519 additions and 2911 deletions
+46
@@ -0,0 +1,46 @@
# docs: https://docs.coderabbit.ai/reference/yaml-template for full configuration options
tone_instructions: "Be concise"
reviews:
profile: "chill"
high_level_summary: false
review_status: false
commit_status: false
collapse_walkthrough: true
changed_files_summary: false
sequence_diagrams: false
estimate_code_review_effort: false
assess_linked_issues: false
related_issues: false
related_prs: false
suggested_labels: false
suggested_reviewers: false
poem: false
auto_review:
enabled: true
auto_incremental_review: true
finishing_touches:
docstrings:
enabled: false
unit_tests:
enabled: false
pre_merge_checks:
docstrings:
mode: "off"
title:
mode: "off"
description:
mode: "off"
issue_assessment:
mode: "off"
chat:
art: false
auto_reply: false
knowledge_base:
web_search:
enabled: true
code_guidelines:
enabled: false
+1 -1
@@ -1,4 +1,4 @@
FROM rust:1.85
FROM rust:1.89
ARG USERNAME=lldapdev
# We need to keep the user as 1001 to match the GitHub runner's UID.
+20 -2
@@ -1,8 +1,26 @@
{
"name": "LLDAP dev",
"build": { "dockerfile": "Dockerfile" },
"build": {
"dockerfile": "Dockerfile"
},
"customizations": {
"vscode": {
"extensions": [
"rust-lang.rust-analyzer"
],
"settings": {
"rust-analyzer.linkedProjects": [
"./Cargo.toml"
]
}
}
},
"features": {
"ghcr.io/devcontainers/features/rust:1": {}
},
"forwardPorts": [
3890,
17170
]
],
"remoteUser": "lldapdev"
}
+1
@@ -0,0 +1 @@
use flake
+5 -8
@@ -1,19 +1,16 @@
codecov:
require_ci_to_pass: yes
comment:
layout: "header,diff,files"
require_changes: true
require_base: true
require_head: true
layout: "condensed_header, diff, condensed_files"
hide_project_coverage: true
require_changes: "coverage_drop"
coverage:
range: "70...100"
status:
project:
default:
target: "75%"
threshold: "0.1%"
removed_code_behavior: adjust_base
github_checks:
annotations: true
threshold: 5
ignore:
- "app"
- "docs"
+1 -1
@@ -1,5 +1,5 @@
# Keep tracking base image
FROM rust:1.85-slim-bookworm
FROM rust:1.89-slim-bookworm
# Set needed env path
ENV PATH="/opt/armv7l-linux-musleabihf-cross/:/opt/armv7l-linux-musleabihf-cross/bin/:/opt/aarch64-linux-musl-cross/:/opt/aarch64-linux-musl-cross/bin/:/opt/x86_64-linux-musl-cross/:/opt/x86_64-linux-musl-cross/bin/:$PATH"
+13 -3
@@ -24,7 +24,7 @@ on:
env:
CARGO_TERM_COLOR: always
MSRV: "1.89.0"
### CI Docs
@@ -88,6 +88,12 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5.0.0
- name: Install Rust
id: toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: "${{ env.MSRV }}"
targets: "wasm32-unknown-unknown"
- uses: actions/cache@v4
with:
path: |
@@ -99,8 +105,6 @@ jobs:
key: lldap-ui-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
lldap-ui-
- name: Add wasm target (rust)
run: rustup target add wasm32-unknown-unknown
- name: Install wasm-pack with cargo
run: cargo install wasm-pack || true
env:
@@ -133,6 +137,12 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v5.0.0
- name: Install Rust
id: toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: "${{ env.MSRV }}"
targets: "${{ matrix.target }}"
- uses: actions/cache@v4
with:
path: |
+2 -2
@@ -8,7 +8,7 @@ on:
env:
CARGO_TERM_COLOR: always
MSRV: 1.85.0
MSRV: "1.89.0"
jobs:
pre_job:
@@ -42,7 +42,7 @@ jobs:
toolchain: "${{ env.MSRV }}"
- uses: Swatinem/rust-cache@v2
- name: Build
run: cargo build --verbose --workspace
run: cargo +${{steps.toolchain.outputs.name}} build --verbose --workspace
- name: Run tests
run: cargo +${{steps.toolchain.outputs.name}} test --verbose --workspace
- name: Generate GraphQL schema
+5
@@ -29,3 +29,8 @@ recipe.json
lldap_config.toml
cert.pem
key.pem
# Nix
result
result-*
.direnv
+3 -1
@@ -46,7 +46,9 @@ advanced guides (scripting, migrations, ...) you can contribute to.
### Code
If you don't know what to start with, check out the
[good first issues](https://github.com/lldap/lldap/labels/good%20first%20issue).
[good first issues](https://github.com/lldap/lldap/labels/good%20first%20issue).
For an alternative development environment setup, see [docs/nix-development.md](docs/nix-development.md).
Otherwise, if you want to fix a specific bug or implement a feature, make sure
to start by creating an issue for it (if it doesn't already exist). There, we
Generated
+89 -64
@@ -686,7 +686,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "531a9155a481e2ee699d4f98f43c0ca4ff8ee1bfd55c31e9e98fb29d2b176fe0"
dependencies = [
"memchr",
"regex-automata 0.4.8",
"regex-automata",
"serde",
]
@@ -1626,6 +1626,18 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "getrandom"
version = "0.3.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26145e563e54f2cadc477553f1ec5ee650b00862f0a58bcd12cbdc5f0ea2d2f4"
dependencies = [
"cfg-if",
"libc",
"r-efi",
"wasi 0.14.5+wasi-0.2.4",
]
[[package]]
name = "gimli"
version = "0.31.1"
@@ -2302,10 +2314,11 @@ checksum = "f5d4a7da358eff58addd2877a45865158f0d78c911d43a5784ceb7bbf52833b0"
[[package]]
name = "js-sys"
version = "0.3.72"
version = "0.3.77"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a88f1bda2bd75b0452a14784937d796722fdebfe50df998aeb3f0b7603019a9"
checksum = "1cfaf33c695fc6e08064efbc1f72ec937429614f25eef83af942d0e227c3a28f"
dependencies = [
"once_cell",
"wasm-bindgen",
]
@@ -2438,7 +2451,7 @@ dependencies = [
"thiserror 1.0.66",
"tokio-util",
"tracing",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2532,7 +2545,7 @@ dependencies = [
"futures-util",
"graphql_client 0.11.0",
"hmac 0.12.1",
"http 0.2.12",
"http 1.1.0",
"juniper",
"jwt 0.16.0",
"ldap3",
@@ -2581,7 +2594,7 @@ dependencies = [
"tracing-subscriber",
"url",
"urlencoding",
"uuid 1.11.0",
"uuid 1.18.1",
"webpki-roots 0.22.6",
]
@@ -2649,6 +2662,7 @@ dependencies = [
"serde",
"sha2 0.9.9",
"thiserror 2.0.12",
"uuid 1.18.1",
]
[[package]]
@@ -2669,7 +2683,7 @@ dependencies = [
"serde",
"serde_bytes",
"strum 0.25.0",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2687,7 +2701,7 @@ dependencies = [
"pretty_assertions",
"serde",
"serde_bytes",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2706,7 +2720,7 @@ dependencies = [
"serde",
"serde_bytes",
"thiserror 2.0.12",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2739,7 +2753,7 @@ dependencies = [
"tokio",
"tracing",
"urlencoding",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2763,7 +2777,7 @@ dependencies = [
"rand 0.8.5",
"tokio",
"tracing",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2837,7 +2851,7 @@ dependencies = [
"tokio",
"tracing",
"tracing-subscriber",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2853,7 +2867,7 @@ dependencies = [
"lldap_opaque_handler",
"mockall",
"tracing",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -2895,11 +2909,11 @@ checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
[[package]]
name = "matchers"
version = "0.1.0"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8263075bb86c5a1b1427b5ae862e8889656f126e9f77c484496e8b47cf5c5558"
checksum = "d1525a2a28c7f4fa0fc98bb91ae755d1e2d1505079e05539e35bc876b5d65ae9"
dependencies = [
"regex-automata 0.1.10",
"regex-automata",
]
[[package]]
@@ -3053,12 +3067,11 @@ checksum = "61807f77802ff30975e01f4f071c8ba10c022052f98b3294119f3e615d13e5be"
[[package]]
name = "nu-ansi-term"
version = "0.46.0"
version = "0.50.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "77a8165726e8236064dbb45459242600304b42a5ea24ee2948e18e023bf7ba84"
checksum = "d4a28e057d01f97e61255210fcff094d74ed0466038633e95017f5beb68e4399"
dependencies = [
"overload",
"winapi",
"windows-sys 0.52.0",
]
[[package]]
@@ -3227,12 +3240,6 @@ dependencies = [
"syn 2.0.100",
]
[[package]]
name = "overload"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b15813163c1d831bf4a13c3610c05c0d03b39feb07f7e09fa234dac9b15aaf39"
[[package]]
name = "parking"
version = "2.2.1"
@@ -3541,6 +3548,12 @@ version = "0.4.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5a3866219251662ec3b26fc217e3e05bf9c4f84325234dfb96bf0bf840889e49"
[[package]]
name = "r-efi"
version = "5.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69cdb34c158ceb288df11e18b4bd39de994f6657d83847bdffdbd7f346754b0f"
[[package]]
name = "rand"
version = "0.7.3"
@@ -3629,17 +3642,8 @@ checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
dependencies = [
"aho-corasick",
"memchr",
"regex-automata 0.4.8",
"regex-syntax 0.8.5",
]
[[package]]
name = "regex-automata"
version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c230d73fb8d8c1b9c0b3135c5142a8acee3a0558fb8db5cf1cb65f8d7862132"
dependencies = [
"regex-syntax 0.6.29",
"regex-automata",
"regex-syntax",
]
[[package]]
@@ -3650,7 +3654,7 @@ checksum = "368758f23274712b504848e9d5a6f010445cc8b87a7cdb4d7cbee666c1288da3"
dependencies = [
"aho-corasick",
"memchr",
"regex-syntax 0.8.5",
"regex-syntax",
]
[[package]]
@@ -3659,12 +3663,6 @@ version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53a49587ad06b26609c52e423de037e7f57f20d53535d66e08c695f347df952a"
[[package]]
name = "regex-syntax"
version = "0.6.29"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
[[package]]
name = "regex-syntax"
version = "0.8.5"
@@ -4023,7 +4021,7 @@ dependencies = [
"thiserror 2.0.12",
"tracing",
"url",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -4049,7 +4047,7 @@ dependencies = [
"chrono",
"inherent",
"ordered-float",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -4061,7 +4059,7 @@ dependencies = [
"chrono",
"sea-query",
"sqlx",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -4419,7 +4417,7 @@ dependencies = [
"tokio-stream",
"tracing",
"url",
"uuid 1.11.0",
"uuid 1.18.1",
"webpki-roots 0.26.8",
]
@@ -4502,7 +4500,7 @@ dependencies = [
"stringprep",
"thiserror 2.0.12",
"tracing",
"uuid 1.11.0",
"uuid 1.18.1",
"whoami",
]
@@ -4541,7 +4539,7 @@ dependencies = [
"stringprep",
"thiserror 2.0.12",
"tracing",
"uuid 1.11.0",
"uuid 1.18.1",
"whoami",
]
@@ -4567,7 +4565,7 @@ dependencies = [
"sqlx-core",
"tracing",
"url",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
@@ -4936,9 +4934,9 @@ checksum = "8df9b6e13f2d32c91b9bd719c00d1958837bc7dec474d94952798cc8e69eeec3"
[[package]]
name = "tracing"
version = "0.1.40"
version = "0.1.41"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c3523ab5a71916ccf420eebdf5521fcef02141234bbc0b8a49f2fdc4544364ef"
checksum = "784e0ac535deb450455cbfa28a6f0df145ea1bb7ae51b821cf5e7927fdcfbdd0"
dependencies = [
"log",
"pin-project-lite",
@@ -4956,14 +4954,14 @@ dependencies = [
"mutually_exclusive_features",
"pin-project",
"tracing",
"uuid 1.11.0",
"uuid 1.18.1",
]
[[package]]
name = "tracing-attributes"
version = "0.1.27"
version = "0.1.30"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "34704c8d6ebcbc939824180af020566b01a7c01f80641264eba0999f6c2b6be7"
checksum = "81383ab64e72a7a8b8e13130c49e3dab29def6d0c7d76a03087b3cf71c5c6903"
dependencies = [
"proc-macro2",
"quote",
@@ -4972,9 +4970,9 @@ dependencies = [
[[package]]
name = "tracing-core"
version = "0.1.32"
version = "0.1.34"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c06d3da6113f116aaee68e4d601191614c9053067f9ab7f6edbcb161237daa54"
checksum = "b9d12581f227e93f094d3af2ae690a574abb8a2b9b7a96e7cfe9647b2b617678"
dependencies = [
"once_cell",
"valuable",
@@ -5007,14 +5005,14 @@ dependencies = [
[[package]]
name = "tracing-subscriber"
version = "0.3.18"
version = "0.3.20"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ad0f048c97dbd9faa9b7df56362b8ebcaa52adb06b498c050d2f4e32f90a7a8b"
checksum = "2054a14f5307d601f88daf0553e1cbf472acc4f2c51afab632431cdcd72124d5"
dependencies = [
"matchers",
"nu-ansi-term",
"once_cell",
"regex",
"regex-automata",
"sharded-slab",
"smallvec",
"thread_local",
@@ -5163,13 +5161,16 @@ checksum = "bc5cf98d8186244414c848017f0e2676b3fcb46807f6668a97dfe67359a3c4b7"
[[package]]
name = "uuid"
version = "1.11.0"
version = "1.18.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8c5f0a0af699448548ad1a2fbf920fb4bee257eae39953ba95cb84891a0446a"
checksum = "2f87b8aa10b915a06587d0dec516c282ff295b475d94abf425d62b57710070a2"
dependencies = [
"atomic",
"getrandom 0.2.15",
"getrandom 0.3.3",
"js-sys",
"md-5",
"serde",
"wasm-bindgen",
]
[[package]]
@@ -5274,6 +5275,24 @@ version = "0.11.0+wasi-snapshot-preview1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
[[package]]
name = "wasi"
version = "0.14.5+wasi-0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4494f6290a82f5fe584817a676a34b9d6763e8d9d18204009fb31dceca98fd4"
dependencies = [
"wasip2",
]
[[package]]
name = "wasip2"
version = "1.0.0+wasi-0.2.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03fa2761397e5bd52002cd7e73110c71af2109aca4e521a9f40473fe685b0a24"
dependencies = [
"wit-bindgen",
]
[[package]]
name = "wasite"
version = "0.1.0"
@@ -5611,6 +5630,12 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "wit-bindgen"
version = "0.45.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5c573471f125075647d03df72e026074b7203790d41351cd6edc96f46bcccd36"
[[package]]
name = "x509-parser"
version = "0.15.1"
+1
@@ -16,6 +16,7 @@ edition = "2024"
homepage = "https://github.com/lldap/lldap"
license = "GPL-3.0-only"
repository = "https://github.com/lldap/lldap"
rust-version = "1.89.0"
[profile.release]
lto = true
+1 -1
@@ -145,7 +145,7 @@ the relevant details (logs of the service, LLDAP logs with `verbose=true` in
the config).
Some specific clients have been tested to work and come with sample
configuration files, or guides. See the [`example_configs`](example_configs)
configuration files, or guides. See the [`example_configs`](example_configs/README.md)
folder for example configs for integration with specific services.
Integration with Linux accounts is possible, through PAM and nslcd. See [PAM
+1
@@ -8,6 +8,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = "1"
+18 -16
@@ -197,17 +197,19 @@ impl App {
<CreateUserForm/>
},
AppRoute::Index | AppRoute::ListUsers => {
let user_button = html! {
<Link classes="btn btn-primary" to={AppRoute::CreateUser}>
<i class="bi-person-plus me-2"></i>
{"Create a user"}
</Link>
let user_button = |key| {
html! {
<Link classes="btn btn-primary" key={key} to={AppRoute::CreateUser}>
<i class="bi-person-plus me-2"></i>
{"Create a user"}
</Link>
}
};
html! {
<div>
{ user_button.clone() }
{ user_button("top-create-user") }
<UserTable />
{ user_button }
{ user_button("bottom-create-user") }
</div>
}
}
@@ -221,19 +223,19 @@ impl App {
<CreateGroupAttributeForm/>
},
AppRoute::ListGroups => {
let group_button = html! {
<Link classes="btn btn-primary" to={AppRoute::CreateGroup}>
<i class="bi-plus-circle me-2"></i>
{"Create a group"}
</Link>
let group_button = |key| {
html! {
<Link classes="btn btn-primary" key={key} to={AppRoute::CreateGroup}>
<i class="bi-plus-circle me-2"></i>
{"Create a group"}
</Link>
}
};
// Note: There's a weird bug when switching from the users page to the groups page
// where the two groups buttons are at the bottom. I don't know why.
html! {
<div>
{ group_button.clone() }
{ group_button("top-create-group") }
<GroupTable />
{ group_button }
{ group_button("bottom-create-group") }
</div>
}
}
+12 -14
@@ -147,20 +147,18 @@ impl Component for JpegFileInput {
true
}
Msg::FileLoaded(file_name, data) => {
if let Some(avatar) = &mut self.avatar {
if let Some(file) = &avatar.file {
if file.name() == file_name {
if let Result::Ok(data) = data {
if !is_valid_jpeg(data.as_slice()) {
// Clear the selection.
self.avatar = Some(JsFile::default());
// TODO: bail!("Chosen image is not a valid JPEG");
} else {
avatar.contents = Some(data);
return true;
}
}
}
if let Some(avatar) = &mut self.avatar
&& let Some(file) = &avatar.file
&& file.name() == file_name
&& let Result::Ok(data) = data
{
if !is_valid_jpeg(data.as_slice()) {
// Clear the selection.
self.avatar = Some(JsFile::default());
// TODO: bail!("Chosen image is not a valid JPEG");
} else {
avatar.contents = Some(data);
return true;
}
}
self.reader = None;
+6 -21
@@ -8,17 +8,12 @@ pub mod group {
use super::AttributeDescription;
pub fn resolve_group_attribute_description(name: &str) -> Option<AttributeDescription> {
pub fn resolve_group_attribute_description(name: &str) -> Option<AttributeDescription<'_>> {
match name {
"creation_date" => Some(AttributeDescription {
attribute_identifier: name,
attribute_name: "creationdate",
aliases: vec![name, "createtimestamp"],
}),
"modified_date" => Some(AttributeDescription {
attribute_identifier: name,
attribute_name: "modifydate",
aliases: vec![name, "modifytimestamp"],
aliases: vec![name, "createtimestamp", "modifytimestamp"],
}),
"display_name" => Some(AttributeDescription {
attribute_identifier: name,
@@ -39,7 +34,7 @@ pub mod group {
}
}
pub fn resolve_group_attribute_description_or_default(name: &str) -> AttributeDescription {
pub fn resolve_group_attribute_description_or_default(name: &str) -> AttributeDescription<'_> {
match resolve_group_attribute_description(name) {
Some(d) => d,
None => AttributeDescription {
@@ -55,7 +50,7 @@ pub mod user {
use super::AttributeDescription;
pub fn resolve_user_attribute_description(name: &str) -> Option<AttributeDescription> {
pub fn resolve_user_attribute_description(name: &str) -> Option<AttributeDescription<'_>> {
match name {
"avatar" => Some(AttributeDescription {
attribute_identifier: name,
@@ -65,17 +60,7 @@ pub mod user {
"creation_date" => Some(AttributeDescription {
attribute_identifier: name,
attribute_name: "creationdate",
aliases: vec![name, "createtimestamp"],
}),
"modified_date" => Some(AttributeDescription {
attribute_identifier: name,
attribute_name: "modifydate",
aliases: vec![name, "modifytimestamp"],
}),
"password_modified_date" => Some(AttributeDescription {
attribute_identifier: name,
attribute_name: "passwordmodifydate",
aliases: vec![name, "pwdchangedtime"],
aliases: vec![name, "createtimestamp", "modifytimestamp"],
}),
"display_name" => Some(AttributeDescription {
attribute_identifier: name,
@@ -111,7 +96,7 @@ pub mod user {
}
}
pub fn resolve_user_attribute_description_or_default(name: &str) -> AttributeDescription {
pub fn resolve_user_attribute_description_or_default(name: &str) -> AttributeDescription<'_> {
match resolve_user_attribute_description(name) {
Some(d) => d,
None => AttributeDescription {
+1
@@ -2,6 +2,7 @@
#![forbid(non_ascii_idents)]
#![allow(clippy::uninlined_format_args)]
#![allow(clippy::let_unit_value)]
#![allow(clippy::unnecessary_operation)] // Doesn't work well with the html macro.
pub mod components;
pub mod infra;
+1
@@ -7,6 +7,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
tracing = "*"
+2
@@ -7,6 +7,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[features]
default = ["opaque_server", "opaque_client"]
@@ -24,6 +25,7 @@ generic-array = "0.14"
rand = "0.8"
sha2 = "0.9"
thiserror = "2"
uuid = { version = "1.18.1", features = ["serde"] }
[dependencies.derive_more]
features = ["debug", "display"]
+4
@@ -4,6 +4,7 @@ use chrono::prelude::*;
use serde::{Deserialize, Serialize};
use std::collections::HashSet;
use std::fmt;
use uuid::Uuid;
pub mod access_control;
pub mod opaque;
@@ -208,8 +209,11 @@ pub mod types {
#[derive(Clone, Serialize, Deserialize)]
pub struct JWTClaims {
#[serde(with = "chrono::serde::ts_seconds")]
pub exp: DateTime<Utc>,
#[serde(with = "chrono::serde::ts_seconds")]
pub iat: DateTime<Utc>,
pub jti: Uuid,
pub user: String,
pub groups: HashSet<String>,
}
+1
@@ -6,6 +6,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[features]
test = []
+1
@@ -6,6 +6,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[features]
test = []
+1
@@ -9,6 +9,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[features]
test = []
+1
@@ -7,6 +7,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies.serde]
workspace = true
+2 -1
@@ -7,6 +7,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = "*"
@@ -72,4 +73,4 @@ path = "../test-utils"
[dev-dependencies.tokio]
features = ["full"]
version = "1.25"
version = "1.25"
@@ -0,0 +1,160 @@
use anyhow::{Context as AnyhowContext, anyhow};
use juniper::FieldResult;
use lldap_access_control::{AdminBackendHandler, ReadonlyBackendHandler};
use lldap_domain::{
deserialize::deserialize_attribute_value,
public_schema::PublicSchema,
requests::CreateGroupRequest,
schema::AttributeList,
types::{Attribute as DomainAttribute, AttributeName, Email},
};
use lldap_domain_handlers::handler::{BackendHandler, ReadSchemaBackendHandler};
use std::{collections::BTreeMap, sync::Arc};
use tracing::{Instrument, Span};
use super::inputs::AttributeValue;
use crate::api::{Context, field_error_callback};
pub struct UnpackedAttributes {
pub email: Option<Email>,
pub display_name: Option<String>,
pub attributes: Vec<DomainAttribute>,
}
pub fn unpack_attributes(
attributes: Vec<AttributeValue>,
schema: &PublicSchema,
is_admin: bool,
) -> FieldResult<UnpackedAttributes> {
let email = attributes
.iter()
.find(|attr| attr.name == "mail")
.cloned()
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.transpose()?
.map(|attr| attr.value.into_string().unwrap())
.map(Email::from);
let display_name = attributes
.iter()
.find(|attr| attr.name == "display_name")
.cloned()
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.transpose()?
.map(|attr| attr.value.into_string().unwrap());
let attributes = attributes
.into_iter()
.filter(|attr| attr.name != "mail" && attr.name != "display_name")
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.collect::<Result<Vec<_>, _>>()?;
Ok(UnpackedAttributes {
email,
display_name,
attributes,
})
}
/// Consolidates caller supplied user fields and attributes into a list of attributes.
///
/// A number of user fields are internally represented as attributes, but are still also
/// available as fields on user objects. This function consolidates these fields and the
/// given attributes into a resulting attribute list. If a value is supplied for both a
/// field and the corresponding attribute, the attribute will take precedence.
pub fn consolidate_attributes(
attributes: Vec<AttributeValue>,
first_name: Option<String>,
last_name: Option<String>,
avatar: Option<String>,
) -> Vec<AttributeValue> {
// Prepare map of the client provided attributes
let mut provided_attributes: BTreeMap<AttributeName, AttributeValue> = attributes
.into_iter()
.map(|x| {
(
x.name.clone().into(),
AttributeValue {
name: x.name.to_ascii_lowercase(),
value: x.value,
},
)
})
.collect::<BTreeMap<_, _>>();
// Prepare list of fallback attribute values
let field_attrs = [
("first_name", first_name),
("last_name", last_name),
("avatar", avatar),
];
for (name, value) in field_attrs.into_iter() {
if let Some(val) = value {
let attr_name: AttributeName = name.into();
provided_attributes
.entry(attr_name)
.or_insert_with(|| AttributeValue {
name: name.to_string(),
value: vec![val],
});
}
}
// Return the values of the resulting map
provided_attributes.into_values().collect()
}
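The precedence rule above (an explicitly supplied attribute wins over the deprecated field of the same name) can be sketched as a standalone program. This uses hypothetical local types, not the crate's `AttributeValue`/`AttributeName`, but mirrors the same `BTreeMap::entry` + `or_insert_with` pattern:

```rust
// Standalone sketch (hypothetical types, not the crate's real ones) of how
// consolidate_attributes lets an explicit attribute take precedence over the
// deprecated field of the same name.
use std::collections::BTreeMap;

#[derive(Debug, Clone, PartialEq)]
struct AttrValue {
    name: String,
    value: Vec<String>,
}

fn consolidate(attrs: Vec<AttrValue>, first_name: Option<String>) -> Vec<AttrValue> {
    let mut map: BTreeMap<String, AttrValue> = attrs
        .into_iter()
        .map(|a| (a.name.to_ascii_lowercase(), a))
        .collect();
    if let Some(v) = first_name {
        // or_insert_with only fires when no attribute of that name was
        // provided, so the attribute wins over the field.
        map.entry("first_name".to_string()).or_insert_with(|| AttrValue {
            name: "first_name".to_string(),
            value: vec![v],
        });
    }
    map.into_values().collect()
}

fn main() {
    let out = consolidate(
        vec![AttrValue { name: "first_name".into(), value: vec!["Ada".into()] }],
        Some("Alan".into()),
    );
    // The attribute value wins over the deprecated field.
    assert_eq!(out[0].value, vec!["Ada".to_string()]);
}
```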
pub async fn create_group_with_details<Handler: BackendHandler>(
context: &Context<Handler>,
request: super::inputs::CreateGroupInput,
span: Span,
) -> FieldResult<crate::query::Group<Handler>> {
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(&span, "Unauthorized group creation"))?;
let schema = handler.get_schema().await?;
let public_schema: PublicSchema = schema.into();
let attributes = request
.attributes
.unwrap_or_default()
.into_iter()
.map(|attr| deserialize_attribute(&public_schema.get_schema().group_attributes, attr, true))
.collect::<Result<Vec<_>, _>>()?;
let request = CreateGroupRequest {
display_name: request.display_name.into(),
attributes,
};
let group_id = handler.create_group(request).await?;
let group_details = handler.get_group_details(group_id).instrument(span).await?;
crate::query::Group::<Handler>::from_group_details(group_details, Arc::new(public_schema))
}
pub fn deserialize_attribute(
attribute_schema: &AttributeList,
attribute: AttributeValue,
is_admin: bool,
) -> FieldResult<DomainAttribute> {
let attribute_name = AttributeName::from(attribute.name.as_str());
let attribute_schema = attribute_schema
.get_attribute_schema(&attribute_name)
.ok_or_else(|| anyhow!("Attribute {} is not defined in the schema", attribute.name))?;
if attribute_schema.is_readonly {
return Err(anyhow!(
"Permission denied: Attribute {} is read-only",
attribute.name
)
.into());
}
if !is_admin && !attribute_schema.is_editable {
return Err(anyhow!(
"Permission denied: Attribute {} is not editable by regular users",
attribute.name
)
.into());
}
let deserialized_values = deserialize_attribute_value(
&attribute.value,
attribute_schema.attribute_type,
attribute_schema.is_list,
)
.context(format!("While deserializing attribute {}", attribute.name))?;
Ok(DomainAttribute {
name: attribute_name,
value: deserialized_values,
})
}
@@ -0,0 +1,99 @@
use juniper::{GraphQLInputObject, GraphQLObject};
#[derive(Clone, PartialEq, Eq, Debug, GraphQLInputObject)]
// The GraphQL name is overridden to avoid conflicting with the AttributeValue type returned by the user/group queries.
#[graphql(name = "AttributeValueInput")]
pub struct AttributeValue {
/// The name of the attribute. It must be present in the schema, and the type informs how
/// to interpret the values.
pub name: String,
/// The values of the attribute.
/// If the attribute is not a list, the vector must contain exactly one element.
/// Integers (signed 64 bits) are represented as strings.
/// Dates are represented as strings in RFC3339 format, e.g. "2019-10-12T07:20:50.52Z".
/// JpegPhotos are represented as base64 encoded strings. They must be valid JPEGs.
pub value: Vec<String>,
}
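The doc comment above states the input contract: a non-list attribute must carry exactly one value, and integers arrive as strings. A minimal standalone sketch of that validation (not the crate's actual `deserialize_attribute_value`):

```rust
// Hedged sketch of the documented input contract for non-list integer
// attributes: exactly one value, represented as a signed-64-bit string.
// Standalone code; the crate's real deserializer lives in lldap_domain.
fn check_singleton(values: &[String]) -> Result<&str, String> {
    match values {
        [single] => Ok(single.as_str()),
        _ => Err(format!("expected exactly 1 value, got {}", values.len())),
    }
}

fn parse_integer_attr(values: &[String]) -> Result<i64, String> {
    let raw = check_singleton(values)?;
    raw.parse::<i64>().map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(parse_integer_attr(&["42".to_string()]), Ok(42));
    assert!(parse_integer_attr(&[]).is_err());
    assert!(parse_integer_attr(&["1".to_string(), "2".to_string()]).is_err());
}
```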
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The details required to create a user.
pub struct CreateUserInput {
pub id: String,
// The email can alternatively be supplied via the "mail" attribute; one of the two must be provided.
pub email: Option<String>,
pub display_name: Option<String>,
/// First name of user. Deprecated: use attribute instead.
/// If both the field and the corresponding attribute are supplied, the attribute takes precedence.
pub first_name: Option<String>,
/// Last name of user. Deprecated: use attribute instead.
/// If both the field and the corresponding attribute are supplied, the attribute takes precedence.
pub last_name: Option<String>,
/// Base64 encoded JpegPhoto. Deprecated: use attribute instead.
/// If both the field and the corresponding attribute are supplied, the attribute takes precedence.
pub avatar: Option<String>,
/// Attributes.
pub attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The details required to create a group.
pub struct CreateGroupInput {
pub display_name: String,
/// User-defined attributes.
pub attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The fields that can be updated for a user.
pub struct UpdateUserInput {
pub id: String,
pub email: Option<String>,
pub display_name: Option<String>,
/// First name of user. Deprecated: use attribute instead.
/// If both the field and the corresponding attribute are supplied, the attribute takes precedence.
pub first_name: Option<String>,
/// Last name of user. Deprecated: use attribute instead.
/// If both the field and the corresponding attribute are supplied, the attribute takes precedence.
pub last_name: Option<String>,
/// Base64 encoded JpegPhoto. Deprecated: use attribute instead.
/// If both the field and the corresponding attribute are supplied, the attribute takes precedence.
pub avatar: Option<String>,
/// Attribute names to remove.
/// They are processed before insertions.
pub remove_attributes: Option<Vec<String>>,
/// Inserts or updates the given attributes.
/// For lists, the entire list must be provided.
pub insert_attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The fields that can be updated for a group.
pub struct UpdateGroupInput {
/// The group ID.
pub id: i32,
/// The new display name.
pub display_name: Option<String>,
/// Attribute names to remove.
/// They are processed before insertions.
pub remove_attributes: Option<Vec<String>>,
/// Inserts or updates the given attributes.
/// For lists, the entire list must be provided.
pub insert_attributes: Option<Vec<AttributeValue>>,
}
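Both update inputs document the same ordering: removals are processed before insertions, and list attributes are replaced wholesale. A standalone sketch of that semantics (hypothetical map-based state, not the crate's handler):

```rust
// Standalone sketch of the documented update order: remove_attributes is
// applied before insert_attributes, and for list attributes the provided
// list replaces the old one entirely. Hypothetical types.
use std::collections::BTreeMap;

fn apply_update(
    mut current: BTreeMap<String, Vec<String>>,
    remove: Vec<String>,
    insert: Vec<(String, Vec<String>)>,
) -> BTreeMap<String, Vec<String>> {
    for name in remove {
        current.remove(&name);
    }
    for (name, value) in insert {
        current.insert(name, value);
    }
    current
}

fn main() {
    let attrs = BTreeMap::from([("mail-alias".to_string(), vec!["old@example.com".to_string()])]);
    // Removing and re-inserting the same name in one request leaves the new value.
    let updated = apply_update(
        attrs,
        vec!["mail-alias".to_string()],
        vec![("mail-alias".to_string(), vec!["new@example.com".to_string()])],
    );
    assert_eq!(updated["mail-alias"], vec!["new@example.com".to_string()]);
}
```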
#[derive(PartialEq, Eq, Debug, GraphQLObject)]
pub struct Success {
ok: bool,
}
impl Success {
pub fn new() -> Self {
Self { ok: true }
}
}
impl Default for Success {
fn default() -> Self {
Self::new()
}
}
@@ -1,27 +1,30 @@
pub mod helpers;
pub mod inputs;
// Re-export public types
pub use inputs::{
AttributeValue, CreateGroupInput, CreateUserInput, Success, UpdateGroupInput, UpdateUserInput,
};
use crate::api::{Context, field_error_callback};
use anyhow::{Context as AnyhowContext, anyhow};
use juniper::{FieldError, FieldResult, GraphQLInputObject, GraphQLObject, graphql_object};
use anyhow::anyhow;
use juniper::{FieldError, FieldResult, graphql_object};
use lldap_access_control::{
AdminBackendHandler, ReadonlyBackendHandler, UserReadableBackendHandler,
UserWriteableBackendHandler,
AdminBackendHandler, UserReadableBackendHandler, UserWriteableBackendHandler,
};
use lldap_domain::{
deserialize::deserialize_attribute_value,
public_schema::PublicSchema,
requests::{
CreateAttributeRequest, CreateGroupRequest, CreateUserRequest, UpdateGroupRequest,
UpdateUserRequest,
},
schema::AttributeList,
types::{
Attribute as DomainAttribute, AttributeName, AttributeType, Email, GroupId,
LdapObjectClass, UserId,
},
requests::{CreateAttributeRequest, CreateUserRequest, UpdateGroupRequest, UpdateUserRequest},
types::{AttributeName, AttributeType, Email, GroupId, LdapObjectClass, UserId},
};
use lldap_domain_handlers::handler::BackendHandler;
use lldap_validation::attributes::{ALLOWED_CHARACTERS_DESCRIPTION, validate_attribute_name};
use std::{collections::BTreeMap, sync::Arc};
use tracing::{Instrument, Span, debug, debug_span};
use std::sync::Arc;
use tracing::{Instrument, debug, debug_span};
use helpers::{
UnpackedAttributes, consolidate_attributes, create_group_with_details, deserialize_attribute,
unpack_attributes,
};
#[derive(PartialEq, Eq, Debug)]
/// The top-level GraphQL mutation type.
@@ -42,183 +45,6 @@ impl<Handler: BackendHandler> Mutation<Handler> {
}
}
}
#[derive(Clone, PartialEq, Eq, Debug, GraphQLInputObject)]
// This conflicts with the attribute values returned by the user/group queries.
#[graphql(name = "AttributeValueInput")]
struct AttributeValue {
/// The name of the attribute. It must be present in the schema, and the type informs how
/// to interpret the values.
name: String,
/// The values of the attribute.
/// If the attribute is not a list, the vector must contain exactly one element.
/// Integers (signed 64 bits) are represented as strings.
/// Dates are represented as strings in RFC3339 format, e.g. "2019-10-12T07:20:50.52Z".
/// JpegPhotos are represented as base64 encoded strings. They must be valid JPEGs.
value: Vec<String>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The details required to create a user.
pub struct CreateUserInput {
id: String,
// The email can be specified as an attribute, but one of the two is required.
email: Option<String>,
display_name: Option<String>,
/// First name of user. Deprecated: use attribute instead.
/// If both field and corresponding attribute is supplied, the attribute will take precedence.
first_name: Option<String>,
/// Last name of user. Deprecated: use attribute instead.
/// If both field and corresponding attribute is supplied, the attribute will take precedence.
last_name: Option<String>,
/// Base64 encoded JpegPhoto. Deprecated: use attribute instead.
/// If both field and corresponding attribute is supplied, the attribute will take precedence.
avatar: Option<String>,
/// Attributes.
attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The details required to create a group.
pub struct CreateGroupInput {
display_name: String,
/// User-defined attributes.
attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The fields that can be updated for a user.
pub struct UpdateUserInput {
id: String,
email: Option<String>,
display_name: Option<String>,
/// First name of user. Deprecated: use attribute instead.
/// If both field and corresponding attribute is supplied, the attribute will take precedence.
first_name: Option<String>,
/// Last name of user. Deprecated: use attribute instead.
/// If both field and corresponding attribute is supplied, the attribute will take precedence.
last_name: Option<String>,
/// Base64 encoded JpegPhoto. Deprecated: use attribute instead.
/// If both field and corresponding attribute is supplied, the attribute will take precedence.
avatar: Option<String>,
/// Attribute names to remove.
/// They are processed before insertions.
remove_attributes: Option<Vec<String>>,
/// Inserts or updates the given attributes.
/// For lists, the entire list must be provided.
insert_attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// The fields that can be updated for a group.
pub struct UpdateGroupInput {
/// The group ID.
id: i32,
/// The new display name.
display_name: Option<String>,
/// Attribute names to remove.
/// They are processed before insertions.
remove_attributes: Option<Vec<String>>,
/// Inserts or updates the given attributes.
/// For lists, the entire list must be provided.
insert_attributes: Option<Vec<AttributeValue>>,
}
#[derive(PartialEq, Eq, Debug, GraphQLObject)]
pub struct Success {
ok: bool,
}
impl Success {
fn new() -> Self {
Self { ok: true }
}
}
struct UnpackedAttributes {
email: Option<Email>,
display_name: Option<String>,
attributes: Vec<DomainAttribute>,
}
fn unpack_attributes(
attributes: Vec<AttributeValue>,
schema: &PublicSchema,
is_admin: bool,
) -> FieldResult<UnpackedAttributes> {
let email = attributes
.iter()
.find(|attr| attr.name == "mail")
.cloned()
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.transpose()?
.map(|attr| attr.value.into_string().unwrap())
.map(Email::from);
let display_name = attributes
.iter()
.find(|attr| attr.name == "display_name")
.cloned()
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.transpose()?
.map(|attr| attr.value.into_string().unwrap());
let attributes = attributes
.into_iter()
.filter(|attr| attr.name != "mail" && attr.name != "display_name")
.map(|attr| deserialize_attribute(&schema.get_schema().user_attributes, attr, is_admin))
.collect::<Result<Vec<_>, _>>()?;
Ok(UnpackedAttributes {
email,
display_name,
attributes,
})
}
/// Consolidates caller supplied user fields and attributes into a list of attributes.
///
/// A number of user fields are internally represented as attributes, but are still also
/// available as fields on user objects. This function consolidates these fields and the
/// given attributes into a resulting attribute list. If a value is supplied for both a
/// field and the corresponding attribute, the attribute will take precedence.
fn consolidate_attributes(
attributes: Vec<AttributeValue>,
first_name: Option<String>,
last_name: Option<String>,
avatar: Option<String>,
) -> Vec<AttributeValue> {
// Prepare map of the client provided attributes
let mut provided_attributes: BTreeMap<AttributeName, AttributeValue> = attributes
.into_iter()
.map(|x| {
(
x.name.clone().into(),
AttributeValue {
name: x.name.to_ascii_lowercase(),
value: x.value,
},
)
})
.collect::<BTreeMap<_, _>>();
// Prepare list of fallback attribute values
let field_attrs = [
("first_name", first_name),
("last_name", last_name),
("avatar", avatar),
];
for (name, value) in field_attrs.into_iter() {
if let Some(val) = value {
let attr_name: AttributeName = name.into();
provided_attributes
.entry(attr_name)
.or_insert_with(|| AttributeValue {
name: name.to_string(),
value: vec![val],
});
}
}
// Return the values of the resulting map
provided_attributes.into_values().collect()
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> Mutation<Handler> {
async fn create_user(
@@ -721,66 +547,6 @@ impl<Handler: BackendHandler> Mutation<Handler> {
Ok(Success::new())
}
}
async fn create_group_with_details<Handler: BackendHandler>(
context: &Context<Handler>,
request: CreateGroupInput,
span: Span,
) -> FieldResult<super::query::Group<Handler>> {
let handler = context
.get_admin_handler()
.ok_or_else(field_error_callback(&span, "Unauthorized group creation"))?;
let schema = handler.get_schema().await?;
let attributes = request
.attributes
.unwrap_or_default()
.into_iter()
.map(|attr| deserialize_attribute(&schema.get_schema().group_attributes, attr, true))
.collect::<Result<Vec<_>, _>>()?;
let request = CreateGroupRequest {
display_name: request.display_name.into(),
attributes,
};
let group_id = handler.create_group(request).await?;
let group_details = handler.get_group_details(group_id).instrument(span).await?;
super::query::Group::<Handler>::from_group_details(group_details, Arc::new(schema))
}
fn deserialize_attribute(
attribute_schema: &AttributeList,
attribute: AttributeValue,
is_admin: bool,
) -> FieldResult<DomainAttribute> {
let attribute_name = AttributeName::from(attribute.name.as_str());
let attribute_schema = attribute_schema
.get_attribute_schema(&attribute_name)
.ok_or_else(|| anyhow!("Attribute {} is not defined in the schema", attribute.name))?;
if attribute_schema.is_readonly {
return Err(anyhow!(
"Permission denied: Attribute {} is read-only",
attribute.name
)
.into());
}
if !is_admin && !attribute_schema.is_editable {
return Err(anyhow!(
"Permission denied: Attribute {} is not editable by regular users",
attribute.name
)
.into());
}
let deserialized_values = deserialize_attribute_value(
&attribute.value,
attribute_schema.attribute_type,
attribute_schema.is_list,
)
.context(format!("While deserializing attribute {}", attribute.name))?;
Ok(DomainAttribute {
name: attribute_name,
value: deserialized_values,
})
}
#[cfg(test)]
mod tests {
use super::*;
File diff suppressed because it is too large
@@ -0,0 +1,267 @@
use chrono::TimeZone;
use juniper::{FieldResult, graphql_object};
use lldap_domain::public_schema::PublicSchema;
use lldap_domain::schema::AttributeList as DomainAttributeList;
use lldap_domain::schema::AttributeSchema as DomainAttributeSchema;
use lldap_domain::types::{Attribute as DomainAttribute, AttributeValue as DomainAttributeValue};
use lldap_domain::types::{Cardinality, Group as DomainGroup, GroupDetails, User as DomainUser};
use lldap_domain_handlers::handler::BackendHandler;
use serde::{Deserialize, Serialize};
use crate::api::Context;
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct AttributeSchema<Handler: BackendHandler> {
schema: DomainAttributeSchema,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> AttributeSchema<Handler> {
fn name(&self) -> String {
self.schema.name.to_string()
}
fn attribute_type(&self) -> lldap_domain::types::AttributeType {
self.schema.attribute_type
}
fn is_list(&self) -> bool {
self.schema.is_list
}
fn is_visible(&self) -> bool {
self.schema.is_visible
}
fn is_editable(&self) -> bool {
self.schema.is_editable
}
fn is_hardcoded(&self) -> bool {
self.schema.is_hardcoded
}
fn is_readonly(&self) -> bool {
self.schema.is_readonly
}
}
impl<Handler: BackendHandler> Clone for AttributeSchema<Handler> {
fn clone(&self) -> Self {
Self {
schema: self.schema.clone(),
_phantom: std::marker::PhantomData,
}
}
}
impl<Handler: BackendHandler> From<DomainAttributeSchema> for AttributeSchema<Handler> {
fn from(value: DomainAttributeSchema) -> Self {
Self {
schema: value,
_phantom: std::marker::PhantomData,
}
}
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct AttributeValue<Handler: BackendHandler> {
pub(super) attribute: DomainAttribute,
pub(super) schema: AttributeSchema<Handler>,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> AttributeValue<Handler> {
fn name(&self) -> &str {
self.attribute.name.as_str()
}
fn value(&self) -> FieldResult<Vec<String>> {
Ok(serialize_attribute_to_graphql(&self.attribute.value))
}
fn schema(&self) -> &AttributeSchema<Handler> {
&self.schema
}
}
impl<Handler: BackendHandler> AttributeValue<Handler> {
fn from_value(attr: DomainAttribute, schema: DomainAttributeSchema) -> Self {
Self {
attribute: attr,
schema: AttributeSchema::<Handler> {
schema,
_phantom: std::marker::PhantomData,
},
_phantom: std::marker::PhantomData,
}
}
pub(super) fn name(&self) -> &str {
self.attribute.name.as_str()
}
}
impl<Handler: BackendHandler> Clone for AttributeValue<Handler> {
fn clone(&self) -> Self {
Self {
attribute: self.attribute.clone(),
schema: self.schema.clone(),
_phantom: std::marker::PhantomData,
}
}
}
pub fn serialize_attribute_to_graphql(attribute_value: &DomainAttributeValue) -> Vec<String> {
let convert_date = |&date| chrono::Utc.from_utc_datetime(&date).to_rfc3339();
match attribute_value {
DomainAttributeValue::String(Cardinality::Singleton(s)) => vec![s.clone()],
DomainAttributeValue::String(Cardinality::Unbounded(l)) => l.clone(),
DomainAttributeValue::Integer(Cardinality::Singleton(i)) => vec![i.to_string()],
DomainAttributeValue::Integer(Cardinality::Unbounded(l)) => {
l.iter().map(|i| i.to_string()).collect()
}
DomainAttributeValue::DateTime(Cardinality::Singleton(dt)) => vec![convert_date(dt)],
DomainAttributeValue::DateTime(Cardinality::Unbounded(l)) => {
l.iter().map(convert_date).collect()
}
DomainAttributeValue::JpegPhoto(Cardinality::Singleton(p)) => vec![String::from(p)],
DomainAttributeValue::JpegPhoto(Cardinality::Unbounded(l)) => {
l.iter().map(String::from).collect()
}
}
}
impl<Handler: BackendHandler> AttributeValue<Handler> {
fn from_schema(a: DomainAttribute, schema: &DomainAttributeList) -> Option<Self> {
schema
.get_attribute_schema(&a.name)
.map(|s| AttributeValue::<Handler>::from_value(a, s.clone()))
}
pub fn user_attributes_from_schema(
user: &mut DomainUser,
schema: &PublicSchema,
) -> Vec<AttributeValue<Handler>> {
let user_attributes = std::mem::take(&mut user.attributes);
let mut all_attributes = schema
.get_schema()
.user_attributes
.attributes
.iter()
.filter(|a| a.is_hardcoded)
.flat_map(|attribute_schema| {
let value: Option<DomainAttributeValue> = match attribute_schema.name.as_str() {
"user_id" => Some(user.user_id.clone().into_string().into()),
"creation_date" => Some(user.creation_date.into()),
"modified_date" => Some(user.modified_date.into()),
"password_modified_date" => Some(user.password_modified_date.into()),
"mail" => Some(user.email.clone().into_string().into()),
"uuid" => Some(user.uuid.clone().into_string().into()),
"display_name" => user.display_name.as_ref().map(|d| d.clone().into()),
"avatar" | "first_name" | "last_name" => None,
_ => panic!("Unexpected hardcoded attribute: {}", attribute_schema.name),
};
value.map(|v| (attribute_schema, v))
})
.map(|(attribute_schema, value)| {
AttributeValue::<Handler>::from_value(
DomainAttribute {
name: attribute_schema.name.clone(),
value,
},
attribute_schema.clone(),
)
})
.collect::<Vec<_>>();
user_attributes
.into_iter()
.flat_map(|a| {
AttributeValue::<Handler>::from_schema(a, &schema.get_schema().user_attributes)
})
.for_each(|value| all_attributes.push(value));
all_attributes
}
pub fn group_attributes_from_schema(
group: &mut DomainGroup,
schema: &PublicSchema,
) -> Vec<AttributeValue<Handler>> {
let group_attributes = std::mem::take(&mut group.attributes);
let mut all_attributes = schema
.get_schema()
.group_attributes
.attributes
.iter()
.filter(|a| a.is_hardcoded)
.map(|attribute_schema| {
(
attribute_schema,
match attribute_schema.name.as_str() {
"group_id" => (group.id.0 as i64).into(),
"creation_date" => group.creation_date.into(),
"modified_date" => group.modified_date.into(),
"uuid" => group.uuid.clone().into_string().into(),
"display_name" => group.display_name.clone().into_string().into(),
_ => panic!("Unexpected hardcoded attribute: {}", attribute_schema.name),
},
)
})
.map(|(attribute_schema, value)| {
AttributeValue::<Handler>::from_value(
DomainAttribute {
name: attribute_schema.name.clone(),
value,
},
attribute_schema.clone(),
)
})
.collect::<Vec<_>>();
group_attributes
.into_iter()
.flat_map(|a| {
AttributeValue::<Handler>::from_schema(a, &schema.get_schema().group_attributes)
})
.for_each(|value| all_attributes.push(value));
all_attributes
}
pub fn group_details_attributes_from_schema(
group: &mut GroupDetails,
schema: &PublicSchema,
) -> Vec<AttributeValue<Handler>> {
let group_attributes = std::mem::take(&mut group.attributes);
let mut all_attributes = schema
.get_schema()
.group_attributes
.attributes
.iter()
.filter(|a| a.is_hardcoded)
.map(|attribute_schema| {
(
attribute_schema,
match attribute_schema.name.as_str() {
"group_id" => (group.group_id.0 as i64).into(),
"creation_date" => group.creation_date.into(),
"modified_date" => group.modified_date.into(),
"uuid" => group.uuid.clone().into_string().into(),
"display_name" => group.display_name.clone().into_string().into(),
_ => panic!("Unexpected hardcoded attribute: {}", attribute_schema.name),
},
)
})
.map(|(attribute_schema, value)| {
AttributeValue::<Handler>::from_value(
DomainAttribute {
name: attribute_schema.name.clone(),
value,
},
attribute_schema.clone(),
)
})
.collect::<Vec<_>>();
group_attributes
.into_iter()
.flat_map(|a| {
AttributeValue::<Handler>::from_schema(a, &schema.get_schema().group_attributes)
})
.for_each(|value| all_attributes.push(value));
all_attributes
}
}
@@ -0,0 +1,89 @@
use anyhow::Context as AnyhowContext;
use juniper::{FieldResult, GraphQLInputObject};
use lldap_domain::deserialize::deserialize_attribute_value;
use lldap_domain::public_schema::PublicSchema;
use lldap_domain::types::GroupId;
use lldap_domain::types::UserId;
use lldap_domain_handlers::handler::UserRequestFilter as DomainRequestFilter;
use lldap_domain_model::model::UserColumn;
use lldap_ldap::{UserFieldType, map_user_field};
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
/// A filter for requests, specifying a boolean expression based on field constraints. Only one of
/// the fields can be set at a time.
pub struct RequestFilter {
any: Option<Vec<RequestFilter>>,
all: Option<Vec<RequestFilter>>,
not: Option<Box<RequestFilter>>,
eq: Option<EqualityConstraint>,
member_of: Option<String>,
member_of_id: Option<i32>,
}
impl RequestFilter {
pub fn try_into_domain_filter(self, schema: &PublicSchema) -> FieldResult<DomainRequestFilter> {
match (
self.eq,
self.any,
self.all,
self.not,
self.member_of,
self.member_of_id,
) {
(Some(eq), None, None, None, None, None) => {
match map_user_field(&eq.field.as_str().into(), schema) {
UserFieldType::NoMatch => {
Err(format!("Unknown request filter: {}", &eq.field).into())
}
UserFieldType::PrimaryField(UserColumn::UserId) => {
Ok(DomainRequestFilter::UserId(UserId::new(&eq.value)))
}
UserFieldType::PrimaryField(column) => {
Ok(DomainRequestFilter::Equality(column, eq.value))
}
UserFieldType::Attribute(name, typ, false) => {
let value = deserialize_attribute_value(&[eq.value], typ, false)
.context(format!("While deserializing attribute {}", &name))?;
Ok(DomainRequestFilter::AttributeEquality(name, value))
}
UserFieldType::Attribute(_, _, true) => {
Err("Equality not supported for list fields".into())
}
UserFieldType::MemberOf => Ok(DomainRequestFilter::MemberOf(eq.value.into())),
UserFieldType::ObjectClass | UserFieldType::Dn | UserFieldType::EntryDn => {
Err("Ldap fields not supported in request filter".into())
}
}
}
(None, Some(any), None, None, None, None) => Ok(DomainRequestFilter::Or(
any.into_iter()
.map(|f| f.try_into_domain_filter(schema))
.collect::<FieldResult<Vec<_>>>()?,
)),
(None, None, Some(all), None, None, None) => Ok(DomainRequestFilter::And(
all.into_iter()
.map(|f| f.try_into_domain_filter(schema))
.collect::<FieldResult<Vec<_>>>()?,
)),
(None, None, None, Some(not), None, None) => Ok(DomainRequestFilter::Not(Box::new(
(*not).try_into_domain_filter(schema)?,
))),
(None, None, None, None, Some(group), None) => {
Ok(DomainRequestFilter::MemberOf(group.into()))
}
(None, None, None, None, None, Some(group_id)) => {
Ok(DomainRequestFilter::MemberOfId(GroupId(group_id)))
}
(None, None, None, None, None, None) => {
Err("No field specified in request filter".into())
}
_ => Err("Multiple fields specified in request filter".into()),
}
}
}
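`try_into_domain_filter` enforces "exactly one field set" by matching the full tuple of `Option`s, which yields the three error cases seen above. A reduced standalone sketch of the same exclusivity check (hypothetical, simplified filter with three branches instead of six):

```rust
// Standalone sketch of the "exactly one field set" rule RequestFilter
// enforces; counting the populated branches distinguishes the error cases.
#[derive(Debug)]
enum Filter {
    Eq(String, String),
    Not(Box<Filter>),
    MemberOf(String),
}

fn build(
    eq: Option<(String, String)>,
    not: Option<Filter>,
    member_of: Option<String>,
) -> Result<Filter, String> {
    let set = eq.is_some() as u8 + not.is_some() as u8 + member_of.is_some() as u8;
    match set {
        0 => Err("No field specified in request filter".into()),
        1 => Ok(match (eq, not, member_of) {
            (Some((f, v)), _, _) => Filter::Eq(f, v),
            (_, Some(f), _) => Filter::Not(Box::new(f)),
            (_, _, Some(g)) => Filter::MemberOf(g),
            _ => unreachable!(),
        }),
        _ => Err("Multiple fields specified in request filter".into()),
    }
}

fn main() {
    assert!(build(None, None, None).is_err());
    assert!(build(Some(("uid".into(), "bob".into())), None, Some("admins".into())).is_err());
    assert!(build(None, None, Some("admins".into())).is_ok());
}
```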
#[derive(PartialEq, Eq, Debug, GraphQLInputObject)]
pub struct EqualityConstraint {
field: String,
value: String,
}
@@ -0,0 +1,123 @@
use chrono::TimeZone;
use juniper::{FieldResult, graphql_object};
use lldap_access_control::ReadonlyBackendHandler;
use lldap_domain::public_schema::PublicSchema;
use lldap_domain::types::{Group as DomainGroup, GroupDetails, GroupId};
use lldap_domain_handlers::handler::{BackendHandler, UserRequestFilter as DomainRequestFilter};
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use tracing::{Instrument, debug, debug_span};
use super::attribute::AttributeValue;
use super::user::User;
use crate::api::{Context, field_error_callback};
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
/// Represents a single group.
pub struct Group<Handler: BackendHandler> {
pub group_id: i32,
pub display_name: String,
creation_date: chrono::NaiveDateTime,
uuid: String,
attributes: Vec<AttributeValue<Handler>>,
pub schema: Arc<PublicSchema>,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
impl<Handler: BackendHandler> Group<Handler> {
pub fn from_group(
mut group: DomainGroup,
schema: Arc<PublicSchema>,
) -> FieldResult<Group<Handler>> {
let attributes =
AttributeValue::<Handler>::group_attributes_from_schema(&mut group, &schema);
Ok(Self {
group_id: group.id.0,
display_name: group.display_name.to_string(),
creation_date: group.creation_date,
uuid: group.uuid.into_string(),
attributes,
schema,
_phantom: std::marker::PhantomData,
})
}
pub fn from_group_details(
mut group_details: GroupDetails,
schema: Arc<PublicSchema>,
) -> FieldResult<Group<Handler>> {
let attributes = AttributeValue::<Handler>::group_details_attributes_from_schema(
&mut group_details,
&schema,
);
Ok(Self {
group_id: group_details.group_id.0,
display_name: group_details.display_name.to_string(),
creation_date: group_details.creation_date,
uuid: group_details.uuid.into_string(),
attributes,
schema,
_phantom: std::marker::PhantomData,
})
}
}
impl<Handler: BackendHandler> Clone for Group<Handler> {
fn clone(&self) -> Self {
Self {
group_id: self.group_id,
display_name: self.display_name.clone(),
creation_date: self.creation_date,
uuid: self.uuid.clone(),
attributes: self.attributes.clone(),
schema: self.schema.clone(),
_phantom: std::marker::PhantomData,
}
}
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> Group<Handler> {
fn id(&self) -> i32 {
self.group_id
}
fn display_name(&self) -> String {
self.display_name.clone()
}
fn creation_date(&self) -> chrono::DateTime<chrono::Utc> {
chrono::Utc.from_utc_datetime(&self.creation_date)
}
fn uuid(&self) -> String {
self.uuid.clone()
}
/// User-defined attributes.
fn attributes(&self) -> &[AttributeValue<Handler>] {
&self.attributes
}
/// The users belonging to this group.
async fn users(&self, context: &Context<Handler>) -> FieldResult<Vec<User<Handler>>> {
let span = debug_span!("[GraphQL query] group::users");
span.in_scope(|| {
debug!(name = %self.display_name);
});
let handler = context
.get_readonly_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized access to group data",
))?;
let domain_users = handler
.list_users(
Some(DomainRequestFilter::MemberOfId(GroupId(self.group_id))),
false,
)
.instrument(span)
.await?;
domain_users
.into_iter()
.map(|u| User::<Handler>::from_user_and_groups(u, self.schema.clone()))
.collect()
}
}
@@ -0,0 +1,539 @@
pub mod attribute;
pub mod filters;
pub mod group;
pub mod schema;
pub mod user;
// Re-export public types
pub use attribute::{AttributeSchema, AttributeValue, serialize_attribute_to_graphql};
pub use filters::{EqualityConstraint, RequestFilter};
pub use group::Group;
pub use schema::{AttributeList, ObjectClassInfo, Schema};
pub use user::User;
use juniper::{FieldResult, graphql_object};
use lldap_access_control::{ReadonlyBackendHandler, UserReadableBackendHandler};
use lldap_domain::public_schema::PublicSchema;
use lldap_domain::types::{GroupId, UserId};
use lldap_domain_handlers::handler::{BackendHandler, ReadSchemaBackendHandler};
use std::sync::Arc;
use tracing::{Instrument, Span, debug, debug_span};
use crate::api::{Context, field_error_callback};
#[derive(PartialEq, Eq, Debug)]
/// The top-level GraphQL query type.
pub struct Query<Handler: BackendHandler> {
_phantom: std::marker::PhantomData<Box<Handler>>,
}
impl<Handler: BackendHandler> Default for Query<Handler> {
fn default() -> Self {
Self::new()
}
}
impl<Handler: BackendHandler> Query<Handler> {
pub fn new() -> Self {
Self {
_phantom: std::marker::PhantomData,
}
}
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> Query<Handler> {
fn api_version() -> &'static str {
"1.0"
}
pub async fn user(context: &Context<Handler>, user_id: String) -> FieldResult<User<Handler>> {
use anyhow::Context;
let span = debug_span!("[GraphQL query] user");
span.in_scope(|| {
debug!(?user_id);
});
let user_id = urlencoding::decode(&user_id).context("Invalid user parameter")?;
let user_id = UserId::new(&user_id);
let handler = context
.get_readable_handler(&user_id)
.ok_or_else(field_error_callback(
&span,
"Unauthorized access to user data",
))?;
let schema = Arc::new(self.get_schema(context, span.clone()).await?);
let user = handler.get_user_details(&user_id).instrument(span).await?;
User::<Handler>::from_user(user, schema)
}
async fn users(
context: &Context<Handler>,
#[graphql(name = "where")] filters: Option<RequestFilter>,
) -> FieldResult<Vec<User<Handler>>> {
let span = debug_span!("[GraphQL query] users");
span.in_scope(|| {
debug!(?filters);
});
let handler = context
.get_readonly_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized access to user list",
))?;
let schema = Arc::new(self.get_schema(context, span.clone()).await?);
let users = handler
.list_users(
filters
.map(|f| f.try_into_domain_filter(&schema))
.transpose()?,
false,
)
.instrument(span)
.await?;
users
.into_iter()
.map(|u| User::<Handler>::from_user_and_groups(u, schema.clone()))
.collect()
}
async fn groups(context: &Context<Handler>) -> FieldResult<Vec<Group<Handler>>> {
let span = debug_span!("[GraphQL query] groups");
let handler = context
.get_readonly_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized access to group list",
))?;
let schema = Arc::new(self.get_schema(context, span.clone()).await?);
let domain_groups = handler.list_groups(None).instrument(span).await?;
domain_groups
.into_iter()
.map(|g| Group::<Handler>::from_group(g, schema.clone()))
.collect()
}
async fn group(context: &Context<Handler>, group_id: i32) -> FieldResult<Group<Handler>> {
let span = debug_span!("[GraphQL query] group");
span.in_scope(|| {
debug!(?group_id);
});
let handler = context
.get_readonly_handler()
.ok_or_else(field_error_callback(
&span,
"Unauthorized access to group data",
))?;
let schema = Arc::new(self.get_schema(context, span.clone()).await?);
let group_details = handler
.get_group_details(GroupId(group_id))
.instrument(span)
.await?;
Group::<Handler>::from_group_details(group_details, schema.clone())
}
async fn schema(context: &Context<Handler>) -> FieldResult<Schema<Handler>> {
let span = debug_span!("[GraphQL query] get_schema");
self.get_schema(context, span).await.map(Into::into)
}
}
impl<Handler: BackendHandler> Query<Handler> {
async fn get_schema(
&self,
context: &Context<Handler>,
span: Span,
) -> FieldResult<PublicSchema> {
let handler = context
.handler
.get_user_restricted_lister_handler(&context.validation_result);
Ok(handler
.get_schema()
.instrument(span)
.await
.map(Into::<PublicSchema>::into)?)
}
}
#[cfg(test)]
mod tests {
use super::*;
use chrono::TimeZone;
use juniper::{
DefaultScalarValue, EmptyMutation, EmptySubscription, GraphQLType, RootNode, Variables,
execute, graphql_value,
};
use lldap_auth::access_control::{Permission, ValidationResults};
use lldap_domain::schema::AttributeSchema as DomainAttributeSchema;
use lldap_domain::types::{Attribute as DomainAttribute, GroupDetails, User as DomainUser};
use lldap_domain::{
schema::{AttributeList, Schema},
types::{AttributeName, AttributeType, LdapObjectClass},
};
use lldap_domain_model::model::UserColumn;
use lldap_test_utils::{MockTestBackendHandler, setup_default_schema};
use mockall::predicate::eq;
use pretty_assertions::assert_eq;
use std::collections::HashSet;
fn schema<'q, C, Q>(query_root: Q) -> RootNode<'q, Q, EmptyMutation<C>, EmptySubscription<C>>
where
Q: GraphQLType<DefaultScalarValue, Context = C, TypeInfo = ()> + 'q,
{
RootNode::new(
query_root,
EmptyMutation::<C>::new(),
EmptySubscription::<C>::new(),
)
}
#[tokio::test]
async fn get_user_by_id() {
const QUERY: &str = r#"{
user(userId: "bob") {
id
email
creationDate
firstName
lastName
uuid
attributes {
name
value
}
groups {
id
displayName
creationDate
uuid
attributes {
name
value
}
}
}
}"#;
let mut mock = MockTestBackendHandler::new();
mock.expect_get_schema().returning(|| {
Ok(Schema {
user_attributes: AttributeList {
attributes: vec![
DomainAttributeSchema {
name: "first_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
is_readonly: false,
},
DomainAttributeSchema {
name: "last_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: true,
is_readonly: false,
},
],
},
group_attributes: AttributeList {
attributes: vec![DomainAttributeSchema {
name: "club_name".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: false,
is_readonly: false,
}],
},
extra_user_object_classes: vec![
LdapObjectClass::from("customUserClass"),
LdapObjectClass::from("myUserClass"),
],
extra_group_object_classes: vec![LdapObjectClass::from("customGroupClass")],
})
});
mock.expect_get_user_details()
.with(eq(UserId::new("bob")))
.return_once(|_| {
Ok(DomainUser {
user_id: UserId::new("bob"),
email: "bob@bobbers.on".into(),
display_name: None,
creation_date: chrono::Utc.timestamp_millis_opt(42).unwrap().naive_utc(),
modified_date: chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
password_modified_date: chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
uuid: lldap_domain::types::Uuid::from_name_and_date(
"bob",
&chrono::Utc.timestamp_millis_opt(42).unwrap().naive_utc(),
),
attributes: vec![
DomainAttribute {
name: "first_name".into(),
value: "Bob".to_string().into(),
},
DomainAttribute {
name: "last_name".into(),
value: "Bobberson".to_string().into(),
},
],
})
});
let mut groups = HashSet::new();
groups.insert(GroupDetails {
group_id: GroupId(3),
display_name: "Bobbersons".into(),
creation_date: chrono::Utc.timestamp_nanos(42).naive_utc(),
uuid: lldap_domain::types::Uuid::from_name_and_date(
"Bobbersons",
&chrono::Utc.timestamp_nanos(42).naive_utc(),
),
attributes: vec![DomainAttribute {
name: "club_name".into(),
value: "Gang of Four".to_string().into(),
}],
modified_date: chrono::Utc.timestamp_nanos(42).naive_utc(),
});
groups.insert(GroupDetails {
group_id: GroupId(7),
display_name: "Jefferees".into(),
creation_date: chrono::Utc.timestamp_nanos(12).naive_utc(),
uuid: lldap_domain::types::Uuid::from_name_and_date(
"Jefferees",
&chrono::Utc.timestamp_nanos(12).naive_utc(),
),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_nanos(12).naive_utc(),
});
mock.expect_get_user_groups()
.with(eq(UserId::new("bob")))
.return_once(|_| Ok(groups));
let context = Context::<MockTestBackendHandler>::new_for_tests(
mock,
ValidationResults {
user: UserId::new("admin"),
permission: Permission::Admin,
},
);
let schema = schema(Query::<MockTestBackendHandler>::new());
let result = execute(QUERY, None, &schema, &Variables::new(), &context).await;
assert!(result.is_ok(), "Query failed: {:?}", result);
}
#[tokio::test]
async fn list_users() {
const QUERY: &str = r#"{
users(filters: {
any: [
{eq: {
field: "id"
value: "bob"
}},
{eq: {
field: "email"
value: "robert@bobbers.on"
}},
{eq: {
field: "firstName"
value: "robert"
}}
]}) {
id
email
}
}"#;
let mut mock = MockTestBackendHandler::new();
setup_default_schema(&mut mock);
mock.expect_list_users()
.with(
eq(Some(lldap_domain_handlers::handler::UserRequestFilter::Or(
vec![
lldap_domain_handlers::handler::UserRequestFilter::UserId(UserId::new(
"bob",
)),
lldap_domain_handlers::handler::UserRequestFilter::Equality(
UserColumn::Email,
"robert@bobbers.on".to_owned(),
),
lldap_domain_handlers::handler::UserRequestFilter::AttributeEquality(
AttributeName::from("first_name"),
"robert".to_string().into(),
),
],
))),
eq(false),
)
.return_once(|_, _| {
Ok(vec![
lldap_domain::types::UserAndGroups {
user: DomainUser {
user_id: UserId::new("bob"),
email: "bob@bobbers.on".into(),
display_name: None,
creation_date: chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
modified_date: chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
password_modified_date: chrono::Utc
.timestamp_opt(0, 0)
.unwrap()
.naive_utc(),
uuid: lldap_domain::types::Uuid::from_name_and_date(
"bob",
&chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
),
attributes: Vec::new(),
},
groups: None,
},
lldap_domain::types::UserAndGroups {
user: DomainUser {
user_id: UserId::new("robert"),
email: "robert@bobbers.on".into(),
display_name: None,
creation_date: chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
modified_date: chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
password_modified_date: chrono::Utc
.timestamp_opt(0, 0)
.unwrap()
.naive_utc(),
uuid: lldap_domain::types::Uuid::from_name_and_date(
"robert",
&chrono::Utc.timestamp_opt(0, 0).unwrap().naive_utc(),
),
attributes: Vec::new(),
},
groups: None,
},
])
});
let context = Context::<MockTestBackendHandler>::new_for_tests(
mock,
ValidationResults {
user: UserId::new("admin"),
permission: Permission::Admin,
},
);
let schema = schema(Query::<MockTestBackendHandler>::new());
assert_eq!(
execute(QUERY, None, &schema, &Variables::new(), &context).await,
Ok((
graphql_value!(
{
"users": [
{
"id": "bob",
"email": "bob@bobbers.on"
},
{
"id": "robert",
"email": "robert@bobbers.on"
},
]
}),
vec![]
))
);
}
#[tokio::test]
async fn get_schema() {
const QUERY: &str = r#"{
schema {
userSchema {
attributes {
name
attributeType
isList
isVisible
isEditable
isHardcoded
}
extraLdapObjectClasses
}
groupSchema {
attributes {
name
attributeType
isList
isVisible
isEditable
isHardcoded
}
extraLdapObjectClasses
}
}
}"#;
let mut mock = MockTestBackendHandler::new();
setup_default_schema(&mut mock);
let context = Context::<MockTestBackendHandler>::new_for_tests(
mock,
ValidationResults {
user: UserId::new("admin"),
permission: Permission::Admin,
},
);
let schema = schema(Query::<MockTestBackendHandler>::new());
let result = execute(QUERY, None, &schema, &Variables::new(), &context).await;
assert!(result.is_ok(), "Query failed: {:?}", result);
}
#[tokio::test]
async fn regular_user_doesnt_see_non_visible_attributes() {
const QUERY: &str = r#"{
schema {
userSchema {
attributes {
name
}
extraLdapObjectClasses
}
}
}"#;
let mut mock = MockTestBackendHandler::new();
mock.expect_get_schema().times(1).return_once(|| {
Ok(Schema {
user_attributes: AttributeList {
attributes: vec![DomainAttributeSchema {
name: "invisible".into(),
attribute_type: AttributeType::JpegPhoto,
is_list: false,
is_visible: false,
is_editable: true,
is_hardcoded: true,
is_readonly: false,
}],
},
group_attributes: AttributeList {
attributes: Vec::new(),
},
extra_user_object_classes: vec![LdapObjectClass::from("customUserClass")],
extra_group_object_classes: Vec::new(),
})
});
let context = Context::<MockTestBackendHandler>::new_for_tests(
mock,
ValidationResults {
user: UserId::new("bob"),
permission: Permission::Regular,
},
);
let schema = schema(Query::<MockTestBackendHandler>::new());
let result = execute(QUERY, None, &schema, &Variables::new(), &context).await;
assert!(result.is_ok(), "Query failed: {:?}", result);
}
}
@@ -0,0 +1,117 @@
use juniper::graphql_object;
use lldap_domain::public_schema::PublicSchema;
use lldap_domain::schema::AttributeList as DomainAttributeList;
use lldap_domain::types::LdapObjectClass;
use lldap_domain_handlers::handler::BackendHandler;
use lldap_ldap::{get_default_group_object_classes, get_default_user_object_classes};
use serde::{Deserialize, Serialize};
use super::attribute::AttributeSchema;
use crate::api::Context;
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct AttributeList<Handler: BackendHandler> {
attributes: DomainAttributeList,
default_classes: Vec<LdapObjectClass>,
extra_classes: Vec<LdapObjectClass>,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
#[derive(Clone)]
pub struct ObjectClassInfo {
object_class: String,
is_hardcoded: bool,
}
#[graphql_object]
impl ObjectClassInfo {
fn object_class(&self) -> &str {
&self.object_class
}
fn is_hardcoded(&self) -> bool {
self.is_hardcoded
}
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> AttributeList<Handler> {
fn attributes(&self) -> Vec<AttributeSchema<Handler>> {
self.attributes
.attributes
.clone()
.into_iter()
.map(Into::into)
.collect()
}
fn extra_ldap_object_classes(&self) -> Vec<String> {
self.extra_classes.iter().map(|c| c.to_string()).collect()
}
fn ldap_object_classes(&self) -> Vec<ObjectClassInfo> {
let mut all_object_classes: Vec<ObjectClassInfo> = self
.default_classes
.iter()
.map(|c| ObjectClassInfo {
object_class: c.to_string(),
is_hardcoded: true,
})
.collect();
all_object_classes.extend(self.extra_classes.iter().map(|c| ObjectClassInfo {
object_class: c.to_string(),
is_hardcoded: false,
}));
all_object_classes
}
}
impl<Handler: BackendHandler> AttributeList<Handler> {
pub fn new(
attributes: DomainAttributeList,
default_classes: Vec<LdapObjectClass>,
extra_classes: Vec<LdapObjectClass>,
) -> Self {
Self {
attributes,
default_classes,
extra_classes,
_phantom: std::marker::PhantomData,
}
}
}
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct Schema<Handler: BackendHandler> {
schema: PublicSchema,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> Schema<Handler> {
fn user_schema(&self) -> AttributeList<Handler> {
AttributeList::<Handler>::new(
self.schema.get_schema().user_attributes.clone(),
get_default_user_object_classes(),
self.schema.get_schema().extra_user_object_classes.clone(),
)
}
fn group_schema(&self) -> AttributeList<Handler> {
AttributeList::<Handler>::new(
self.schema.get_schema().group_attributes.clone(),
get_default_group_object_classes(),
self.schema.get_schema().extra_group_object_classes.clone(),
)
}
}
impl<Handler: BackendHandler> From<PublicSchema> for Schema<Handler> {
fn from(value: PublicSchema) -> Self {
Self {
schema: value,
_phantom: std::marker::PhantomData,
}
}
}
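The `ldap_object_classes` resolver above merges the hardcoded default classes with the user-defined extras, flagging each accordingly. A minimal standalone sketch of that merge (plain `&str` slices instead of `LdapObjectClass`; the function name is illustrative, not part of the API):

```rust
// Standalone sketch of the default-plus-extra merge performed by
// `ldap_object_classes` above; `ObjectClassInfo` mirrors the GraphQL type,
// everything else here is illustrative.
#[derive(Debug, PartialEq)]
struct ObjectClassInfo {
    object_class: String,
    is_hardcoded: bool,
}

fn merge_object_classes(defaults: &[&str], extras: &[&str]) -> Vec<ObjectClassInfo> {
    // Defaults come first and are flagged as hardcoded ...
    let mut all: Vec<ObjectClassInfo> = defaults
        .iter()
        .map(|c| ObjectClassInfo {
            object_class: c.to_string(),
            is_hardcoded: true,
        })
        .collect();
    // ... followed by the user-defined classes, flagged as removable.
    all.extend(extras.iter().map(|c| ObjectClassInfo {
        object_class: c.to_string(),
        is_hardcoded: false,
    }));
    all
}
```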
@@ -0,0 +1,136 @@
use chrono::TimeZone;
use juniper::{FieldResult, graphql_object};
use lldap_access_control::UserReadableBackendHandler;
use lldap_domain::public_schema::PublicSchema;
use lldap_domain::types::{User as DomainUser, UserAndGroups as DomainUserAndGroups};
use lldap_domain_handlers::handler::BackendHandler;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use tracing::{Instrument, debug, debug_span};
use super::attribute::AttributeValue;
use super::group::Group;
use crate::api::Context;
/// Represents a single user.
#[derive(PartialEq, Eq, Debug, Serialize, Deserialize)]
pub struct User<Handler: BackendHandler> {
user: DomainUser,
attributes: Vec<AttributeValue<Handler>>,
schema: Arc<PublicSchema>,
groups: Option<Vec<Group<Handler>>>,
_phantom: std::marker::PhantomData<Box<Handler>>,
}
impl<Handler: BackendHandler> User<Handler> {
pub fn from_user(mut user: DomainUser, schema: Arc<PublicSchema>) -> FieldResult<Self> {
let attributes = AttributeValue::<Handler>::user_attributes_from_schema(&mut user, &schema);
Ok(Self {
user,
attributes,
schema,
groups: None,
_phantom: std::marker::PhantomData,
})
}
}
impl<Handler: BackendHandler> User<Handler> {
pub fn from_user_and_groups(
DomainUserAndGroups { user, groups }: DomainUserAndGroups,
schema: Arc<PublicSchema>,
) -> FieldResult<Self> {
let mut user = Self::from_user(user, schema.clone())?;
if let Some(groups) = groups {
user.groups = Some(
groups
.into_iter()
.map(|g| Group::<Handler>::from_group_details(g, schema.clone()))
.collect::<FieldResult<Vec<_>>>()?,
);
}
Ok(user)
}
}
#[graphql_object(context = Context<Handler>)]
impl<Handler: BackendHandler> User<Handler> {
fn id(&self) -> &str {
self.user.user_id.as_str()
}
fn email(&self) -> &str {
self.user.email.as_str()
}
fn display_name(&self) -> &str {
self.user.display_name.as_deref().unwrap_or("")
}
fn first_name(&self) -> &str {
self.attributes
.iter()
.find(|a| a.name() == "first_name")
.map(|a| a.attribute.value.as_str().unwrap_or_default())
.unwrap_or_default()
}
fn last_name(&self) -> &str {
self.attributes
.iter()
.find(|a| a.name() == "last_name")
.map(|a| a.attribute.value.as_str().unwrap_or_default())
.unwrap_or_default()
}
fn avatar(&self) -> Option<String> {
self.attributes
.iter()
.find(|a| a.name() == "avatar")
.map(|a| {
String::from(
a.attribute
.value
.as_jpeg_photo()
.expect("Invalid JPEG returned by the DB"),
)
})
}
fn creation_date(&self) -> chrono::DateTime<chrono::Utc> {
chrono::Utc.from_utc_datetime(&self.user.creation_date)
}
fn uuid(&self) -> &str {
self.user.uuid.as_str()
}
/// User-defined attributes.
fn attributes(&self) -> &[AttributeValue<Handler>] {
&self.attributes
}
/// The groups to which this user belongs.
async fn groups(&self, context: &Context<Handler>) -> FieldResult<Vec<Group<Handler>>> {
if let Some(groups) = &self.groups {
return Ok(groups.clone());
}
let span = debug_span!("[GraphQL query] user::groups");
span.in_scope(|| {
debug!(user_id = ?self.user.user_id);
});
let handler = context
.get_readable_handler(&self.user.user_id)
.expect("We shouldn't be able to get here without readable permission");
let domain_groups = handler
.get_user_groups(&self.user.user_id)
.instrument(span)
.await?;
let mut groups = domain_groups
.into_iter()
.map(|g| Group::<Handler>::from_group_details(g, self.schema.clone()))
.collect::<FieldResult<Vec<Group<Handler>>>>()?;
groups.sort_by(|g1, g2| g1.display_name.cmp(&g2.display_name));
Ok(groups)
}
}
@@ -7,6 +7,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = "*"
@@ -63,4 +64,4 @@ version = "1.25"
[dev-dependencies.lldap_domain]
path = "../domain"
features = ["test"]
@@ -184,11 +184,11 @@ fn get_group_attribute_equality_filter(
]),
(Ok(_), Err(e)) => {
warn!("Invalid value for attribute {} (lowercased): {}", field, e);
GroupRequestFilter::False
}
(Err(e), _) => {
warn!("Invalid value for attribute {}: {}", field, e);
GroupRequestFilter::False
}
}
}
@@ -209,7 +209,7 @@ fn convert_group_filter(
.map(|id| GroupRequestFilter::GroupId(GroupId(id)))
.unwrap_or_else(|_| {
warn!("Given group id is not a valid integer: {}", value_lc);
GroupRequestFilter::False
})),
GroupFieldType::DisplayName => Ok(GroupRequestFilter::DisplayName(value_lc.into())),
GroupFieldType::Uuid => Uuid::try_from(value_lc.as_str())
@@ -226,7 +226,7 @@ fn convert_group_filter(
.map(GroupRequestFilter::Member)
.unwrap_or_else(|e| {
warn!("Invalid member filter on group: {}", e);
GroupRequestFilter::False
})),
GroupFieldType::ObjectClass => Ok(GroupRequestFilter::from(
get_default_group_object_classes()
@@ -246,7 +246,7 @@ fn convert_group_filter(
.map(GroupRequestFilter::DisplayName)
.unwrap_or_else(|_| {
warn!("Invalid dn filter on group: {}", value_lc);
GroupRequestFilter::False
}))
}
GroupFieldType::NoMatch => {
@@ -257,7 +257,7 @@ fn convert_group_filter(
field
);
}
Ok(GroupRequestFilter::False)
}
GroupFieldType::Attribute(field, typ, is_list) => Ok(
get_group_attribute_equality_filter(&field, typ, is_list, value),
@@ -272,21 +272,55 @@ fn convert_group_filter(
}),
}
}
LdapFilter::And(filters) => {
let res = filters
.iter()
.map(rec)
.filter(|f| !matches!(f, Ok(GroupRequestFilter::True)))
.flat_map(|f| match f {
Ok(GroupRequestFilter::And(v)) => v.into_iter().map(Ok).collect(),
f => vec![f],
})
.collect::<LdapResult<Vec<_>>>()?;
if res.is_empty() {
Ok(GroupRequestFilter::True)
} else if res.len() == 1 {
Ok(res.into_iter().next().unwrap())
} else {
Ok(GroupRequestFilter::And(res))
}
}
LdapFilter::Or(filters) => {
let res = filters
.iter()
.map(rec)
.filter(|c| !matches!(c, Ok(GroupRequestFilter::False)))
.flat_map(|f| match f {
Ok(GroupRequestFilter::Or(v)) => v.into_iter().map(Ok).collect(),
f => vec![f],
})
.collect::<LdapResult<Vec<_>>>()?;
if res.is_empty() {
Ok(GroupRequestFilter::False)
} else if res.len() == 1 {
Ok(res.into_iter().next().unwrap())
} else {
Ok(GroupRequestFilter::Or(res))
}
}
LdapFilter::Not(filter) => Ok(match rec(filter)? {
GroupRequestFilter::True => GroupRequestFilter::False,
GroupRequestFilter::False => GroupRequestFilter::True,
f => GroupRequestFilter::Not(Box::new(f)),
}),
LdapFilter::Present(field) => {
let field = AttributeName::from(field.as_str());
Ok(match map_group_field(&field, schema) {
GroupFieldType::Attribute(name, _, _) => {
GroupRequestFilter::CustomAttributePresent(name)
}
GroupFieldType::NoMatch => GroupRequestFilter::False,
_ => GroupRequestFilter::True,
})
}
LdapFilter::Substring(field, substring_filter) => {
@@ -295,7 +329,7 @@ fn convert_group_filter(
GroupFieldType::DisplayName => Ok(GroupRequestFilter::DisplayNameSubString(
substring_filter.clone().into(),
)),
GroupFieldType::NoMatch => Ok(GroupRequestFilter::False),
_ => Err(LdapError {
code: LdapResultCode::UnwillingToPerform,
message: format!(
@@ -354,3 +388,305 @@ pub fn convert_groups_to_ldap_op<'a>(
))
})
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
handler::tests::{make_group_search_request, setup_bound_admin_handler},
search::{make_search_request, make_search_success},
};
use ldap3_proto::proto::LdapSubstringFilter;
use lldap_domain::{
types::{GroupId, UserId},
uuid,
};
use lldap_domain_handlers::handler::*;
use lldap_test_utils::MockTestBackendHandler;
use mockall::predicate::eq;
use pretty_assertions::assert_eq;
#[tokio::test]
async fn test_search_groups() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::True)))
.times(1)
.return_once(|_| {
Ok(vec![
Group {
id: GroupId(1),
display_name: "group_1".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob"), UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
},
Group {
id: GroupId(3),
display_name: "BestGroup".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
},
])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::And(vec![]),
vec![
"objectClass",
"dn",
"cn",
"uniqueMember",
"entryUuid",
"entryDN",
],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"group_1".to_vec()]
},
LdapPartialAttribute {
atype: "entryDN".to_string(),
vals: vec![b"uid=group_1,ou=groups,dc=example,dc=com".to_vec()],
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()],
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![b"groupOfUniqueNames".to_vec(), b"groupOfNames".to_vec()]
},
LdapPartialAttribute {
atype: "uniqueMember".to_string(),
vals: vec![
b"uid=bob,ou=people,dc=example,dc=com".to_vec(),
b"uid=john,ou=people,dc=example,dc=com".to_vec(),
],
},
],
}),
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=BestGroup,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"BestGroup".to_vec()]
},
LdapPartialAttribute {
atype: "entryDN".to_string(),
vals: vec![b"uid=BestGroup,ou=groups,dc=example,dc=com".to_vec()],
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()],
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![b"groupOfUniqueNames".to_vec(), b"groupOfNames".to_vec()]
},
LdapPartialAttribute {
atype: "uniqueMember".to_string(),
vals: vec![b"uid=john,ou=people,dc=example,dc=com".to_vec()],
},
],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_by_groupid() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::GroupId(GroupId(1)))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::Equality("groupid".to_string(), "1".to_string()),
vec!["dn"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_filter() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(vec![
GroupRequestFilter::DisplayName("group_1".into()),
GroupRequestFilter::Member(UserId::new("bob")),
GroupRequestFilter::DisplayName("rockstars".into()),
false.into(),
GroupRequestFilter::Uuid(uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc")),
false.into(),
GroupRequestFilter::DisplayNameSubString(SubStringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
}),
]))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::And(vec![
LdapFilter::Equality("cN".to_string(), "Group_1".to_string()),
LdapFilter::Equality(
"uniqueMember".to_string(),
"uid=bob,ou=peopLe,Dc=eXample,dc=com".to_string(),
),
LdapFilter::Equality(
"dn".to_string(),
"uid=rockstars,ou=groups,dc=example,dc=com".to_string(),
),
LdapFilter::Equality(
"dn".to_string(),
"uid=rockstars,ou=people,dc=example,dc=com".to_string(),
),
LdapFilter::Equality(
"uuid".to_string(),
"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_string(),
),
LdapFilter::Equality("obJEctclass".to_string(), "groupofUniqueNames".to_string()),
LdapFilter::Equality("objectclass".to_string(), "groupOfNames".to_string()),
LdapFilter::Present("objectclass".to_string()),
LdapFilter::Present("dn".to_string()),
LdapFilter::Not(Box::new(LdapFilter::Present(
"random_attribUte".to_string(),
))),
LdapFilter::Equality("unknown_attribute".to_string(), "randomValue".to_string()),
LdapFilter::Substring(
"cn".to_owned(),
LdapSubstringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
},
),
]),
vec!["1.1"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_filter_2() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::DisplayName("group_1".into()),
GroupRequestFilter::Member(UserId::new("bob")),
]))))
.times(1)
.return_once(|_| Ok(vec![]));
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::Or(vec![
LdapFilter::Equality("cn".to_string(), "group_1".to_string()),
LdapFilter::Equality(
"member".to_string(),
"uid=bob,ou=people,dc=example,dc=com".to_string(),
),
]),
vec!["cn"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()])
);
}
#[tokio::test]
async fn test_search_groups_filter_3() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Not(Box::new(
GroupRequestFilter::DisplayName("group_1".into()),
)))))
.times(1)
.return_once(|_| Ok(vec![]));
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::Not(Box::new(LdapFilter::Equality(
"cn".to_string(),
"group_1".to_string(),
))),
vec!["cn"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()])
);
}
#[tokio::test]
async fn test_search_group_as_scope() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::DisplayName("group_1".into()))))
.times(1)
.return_once(|_| Ok(vec![]));
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_search_request(
"cn=group_1,ou=groups,dc=example,dc=com",
LdapFilter::And(vec![]),
vec!["objectClass"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()]),
);
}
}
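The filter conversion above simplifies boolean combinators before they reach the backend: `True` operands are dropped from `And` (and `False` from `Or`), nested combinators of the same kind are spliced into the parent, and an empty or single-element combinator collapses. A minimal standalone sketch of the `And` case, using a toy `Filter` enum rather than the real `GroupRequestFilter` (the `Or` case is symmetric with `False` as the neutral element; error propagation through `LdapResult` is omitted):

```rust
// Toy filter enum; the real code uses GroupRequestFilter/UserRequestFilter.
#[derive(Debug, Clone, PartialEq)]
enum Filter {
    True,
    Eq(&'static str, &'static str),
    And(Vec<Filter>),
}

fn simplify_and(filters: Vec<Filter>) -> Filter {
    let res: Vec<Filter> = filters
        .into_iter()
        // `True` is the neutral element of `And`, so drop it.
        .filter(|f| *f != Filter::True)
        // Splice nested `And`s into the parent to keep the tree flat.
        .flat_map(|f| match f {
            Filter::And(v) => v,
            f => vec![f],
        })
        .collect();
    match res.len() {
        0 => Filter::True,                    // an `And` of nothing is true
        1 => res.into_iter().next().unwrap(), // unwrap a single-element `And`
        _ => Filter::And(res),
    }
}
```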
@@ -3,10 +3,10 @@ use crate::core::{
utils::{
ExpandedAttributes, LdapInfo, UserFieldType, expand_attribute_wildcards,
get_custom_attribute, get_group_id_from_distinguished_name_or_plain_name,
get_user_id_from_distinguished_name_or_plain_name, map_user_field, to_generalized_time,
},
};
use chrono::TimeZone;
use ldap3_proto::{
LdapFilter, LdapPartialAttribute, LdapResultCode, LdapSearchResultEntry, proto::LdapOp,
};
@@ -87,24 +87,15 @@ pub fn get_user_attribute(
UserFieldType::PrimaryField(UserColumn::DisplayName) => {
vec![user.display_name.clone()?.into_bytes()]
}
UserFieldType::PrimaryField(UserColumn::CreationDate) => {
vec![to_generalized_time(&user.creation_date)]
}
UserFieldType::PrimaryField(UserColumn::ModifiedDate) => {
vec![to_generalized_time(&user.modified_date)]
}
UserFieldType::PrimaryField(UserColumn::PasswordModifiedDate) => {
vec![to_generalized_time(&user.password_modified_date)]
}
UserFieldType::Attribute(attr, _, _) => get_custom_attribute(&user.attributes, &attr)?,
UserFieldType::NoMatch => match attribute.as_str() {
"1.1" => return None,
@@ -202,11 +193,11 @@ fn get_user_attribute_equality_filter(
]),
(Ok(_), Err(e)) => {
warn!("Invalid value for attribute {} (lowercased): {}", field, e);
UserRequestFilter::False
}
(Err(e), _) => {
warn!("Invalid value for attribute {}: {}", field, e);
UserRequestFilter::False
}
}
}
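The hunks above replace the RFC 3339 timestamps previously built with `chrono::Utc.from_utc_datetime(..).to_rfc3339()` by the LDAP GeneralizedTime syntax via the `to_generalized_time` helper from `utils`, which is not shown in this diff. As a hedged sketch only, GeneralizedTime for a UTC timestamp is fixed-width `YYYYMMDDHHMMSSZ` formatting; the component-based signature below is an assumption made to keep the example dependency-free (the real helper presumably takes a `chrono::NaiveDateTime` and returns bytes):

```rust
// Hedged sketch of LDAP GeneralizedTime (RFC 4517) formatting for a UTC
// timestamp; the real `to_generalized_time` in `utils` may differ in
// signature and return type.
fn format_generalized_time(y: i32, mo: u32, d: u32, h: u32, mi: u32, s: u32) -> String {
    // Fixed-width decimal fields, terminated by 'Z' to mark UTC.
    format!("{y:04}{mo:02}{d:02}{h:02}{mi:02}{s:02}Z")
}
```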
@@ -218,13 +209,47 @@ fn convert_user_filter(
) -> LdapResult<UserRequestFilter> {
let rec = |f| convert_user_filter(ldap_info, f, schema);
match filter {
LdapFilter::And(filters) => {
let res = filters
.iter()
.map(rec)
.filter(|c| !matches!(c, Ok(UserRequestFilter::True)))
.flat_map(|f| match f {
Ok(UserRequestFilter::And(v)) => v.into_iter().map(Ok).collect(),
f => vec![f],
})
.collect::<LdapResult<Vec<_>>>()?;
if res.is_empty() {
Ok(UserRequestFilter::True)
} else if res.len() == 1 {
Ok(res.into_iter().next().unwrap())
} else {
Ok(UserRequestFilter::And(res))
}
}
LdapFilter::Or(filters) => {
let res = filters
.iter()
.map(rec)
.filter(|c| !matches!(c, Ok(UserRequestFilter::False)))
.flat_map(|f| match f {
Ok(UserRequestFilter::Or(v)) => v.into_iter().map(Ok).collect(),
f => vec![f],
})
.collect::<LdapResult<Vec<_>>>()?;
if res.is_empty() {
Ok(UserRequestFilter::False)
} else if res.len() == 1 {
Ok(res.into_iter().next().unwrap())
} else {
Ok(UserRequestFilter::Or(res))
}
}
LdapFilter::Not(filter) => Ok(match rec(filter)? {
UserRequestFilter::True => UserRequestFilter::False,
UserRequestFilter::False => UserRequestFilter::True,
f => UserRequestFilter::Not(Box::new(f)),
}),
LdapFilter::Equality(field, value) => {
let field = AttributeName::from(field.as_str());
let value_lc = value.to_ascii_lowercase();
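The `And`/`Or` arms above drop neutral operands (`True` in a conjunction, `False` in a disjunction), flatten nested filters of the same kind, and collapse empty or single-element results. A minimal stand-alone sketch of the conjunction case, using a hypothetical `Filter` enum in place of `UserRequestFilter` (names are illustrative, not from the codebase):

```rust
// Hypothetical stand-in for UserRequestFilter, for illustration only.
#[derive(Debug, Clone, PartialEq)]
enum Filter {
    True,
    False,
    And(Vec<Filter>),
    Or(Vec<Filter>),
}

fn simplify_and(filters: Vec<Filter>) -> Filter {
    // Drop `True` operands and flatten nested `And`s, mirroring the
    // simplification performed in convert_user_filter above.
    let res: Vec<Filter> = filters
        .into_iter()
        .filter(|f| *f != Filter::True)
        .flat_map(|f| match f {
            Filter::And(v) => v,
            other => vec![other],
        })
        .collect();
    match res.len() {
        0 => Filter::True,                    // empty conjunction is trivially true
        1 => res.into_iter().next().unwrap(), // single operand needs no wrapper
        _ => Filter::And(res),
    }
}

fn main() {
    // An `And` with no operands collapses to True.
    assert_eq!(simplify_and(vec![]), Filter::True);
    // `True` operands are dropped and nested `And`s are flattened away.
    assert_eq!(
        simplify_and(vec![Filter::True, Filter::And(vec![Filter::False])]),
        Filter::False
    );
    // Multiple remaining operands keep the `And` wrapper.
    assert_eq!(
        simplify_and(vec![Filter::False, Filter::Or(vec![])]),
        Filter::And(vec![Filter::False, Filter::Or(vec![])])
    );
}
```

The `Or` case in the diff is the mirror image: it filters out `False`, flattens nested `Or`s, and returns `False` for the empty disjunction.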
@@ -250,7 +275,7 @@ fn convert_user_filter(
field
);
}
Ok(UserRequestFilter::from(false))
Ok(UserRequestFilter::False)
}
UserFieldType::ObjectClass => Ok(UserRequestFilter::from(
get_default_user_object_classes()
@@ -269,7 +294,7 @@ fn convert_user_filter(
.map(UserRequestFilter::MemberOf)
.unwrap_or_else(|e| {
warn!("Invalid memberOf filter: {}", e);
UserRequestFilter::from(false)
UserRequestFilter::False
})),
UserFieldType::EntryDn | UserFieldType::Dn => {
Ok(get_user_id_from_distinguished_name_or_plain_name(
@@ -280,7 +305,7 @@ fn convert_user_filter(
.map(UserRequestFilter::UserId)
.unwrap_or_else(|_| {
warn!("Invalid dn filter on user: {}", value_lc);
UserRequestFilter::from(false)
UserRequestFilter::False
}))
}
}
@@ -291,8 +316,8 @@ fn convert_user_filter(
UserFieldType::Attribute(name, _, _) => {
UserRequestFilter::CustomAttributePresent(name)
}
UserFieldType::NoMatch => UserRequestFilter::from(false),
_ => UserRequestFilter::from(true),
UserFieldType::NoMatch => UserRequestFilter::False,
_ => UserRequestFilter::True,
})
}
LdapFilter::Substring(field, substring_filter) => {
@@ -311,7 +336,7 @@ fn convert_user_filter(
code: LdapResultCode::UnwillingToPerform,
message: format!("Unsupported user attribute for substring filter: {field:?}"),
}),
UserFieldType::NoMatch => Ok(UserRequestFilter::from(false)),
UserFieldType::NoMatch => Ok(UserRequestFilter::False),
UserFieldType::PrimaryField(UserColumn::Email) => Ok(UserRequestFilter::SubString(
UserColumn::LowercaseEmail,
substring_filter.clone().into(),
@@ -375,3 +400,374 @@ pub fn convert_users_to_ldap_op<'a>(
))
})
}
#[cfg(test)]
mod tests {
use super::*;
use crate::{
handler::tests::{
make_user_search_request, setup_bound_admin_handler, setup_bound_handler_with_group,
setup_bound_readonly_handler,
},
search::{make_search_request, make_search_success},
};
use chrono::{DateTime, Duration, NaiveDateTime, TimeZone, Utc};
use lldap_domain::types::{Attribute, GroupDetails, JpegPhoto};
use lldap_test_utils::MockTestBackendHandler;
use mockall::predicate::eq;
use pretty_assertions::assert_eq;
fn assert_timestamp_within_margin(
timestamp_bytes: &[u8],
base_timestamp_dt: DateTime<Utc>,
time_margin: Duration,
) {
let timestamp_str =
std::str::from_utf8(timestamp_bytes).expect("Invalid conversion from UTF-8 to string");
let timestamp_naive = NaiveDateTime::parse_from_str(timestamp_str, "%Y%m%d%H%M%SZ")
.expect("Invalid timestamp format");
let timestamp_dt: DateTime<Utc> = Utc.from_utc_datetime(&timestamp_naive);
let within_range = (base_timestamp_dt - timestamp_dt).abs() <= time_margin;
assert!(
within_range,
"Timestamp not within range: expected within [{} - {}], got [{}]",
base_timestamp_dt - time_margin,
base_timestamp_dt + time_margin,
timestamp_dt
);
}
#[tokio::test]
async fn test_search_regular_user() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
eq(Some(UserRequestFilter::And(vec![
UserRequestFilter::True,
UserRequestFilter::UserId(UserId::new("test")),
]))),
eq(false),
)
.times(1)
.return_once(|_, _| {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("test"),
..Default::default()
},
groups: None,
}])
});
let ldap_handler = setup_bound_handler_with_group(mock, "regular").await;
let request =
make_user_search_request::<String>(LdapFilter::And(vec![]), vec!["1.1".to_string()]);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=test,ou=people,dc=example,dc=com".to_string(),
attributes: vec![],
}),
make_search_success()
]),
);
}
#[tokio::test]
async fn test_search_readonly_user() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(eq(Some(UserRequestFilter::True)), eq(false))
.times(1)
.return_once(|_, _| Ok(vec![]));
let ldap_handler = setup_bound_readonly_handler(mock).await;
let request = make_user_search_request(LdapFilter::And(vec![]), vec!["1.1"]);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()]),
);
}
#[tokio::test]
async fn test_search_member_of() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(eq(Some(UserRequestFilter::True)), eq(true))
.times(1)
.return_once(|_, _| {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob"),
..Default::default()
},
groups: Some(vec![GroupDetails {
group_id: lldap_domain::types::GroupId(42),
display_name: "rockstars".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
uuid: lldap_domain::uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}]),
}])
});
let ldap_handler = setup_bound_readonly_handler(mock).await;
let request = make_user_search_request::<String>(
LdapFilter::And(vec![]),
vec!["memberOf".to_string()],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=bob,ou=people,dc=example,dc=com".to_string(),
attributes: vec![LdapPartialAttribute {
atype: "memberOf".to_string(),
vals: vec![b"cn=rockstars,ou=groups,dc=example,dc=com".to_vec()]
}],
}),
make_search_success(),
]),
);
}
#[tokio::test]
async fn test_search_user_as_scope() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
eq(Some(UserRequestFilter::UserId(UserId::new("bob")))),
eq(false),
)
.times(1)
.return_once(|_, _| Ok(vec![]));
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_search_request(
"uid=bob,ou=people,dc=example,dc=com",
LdapFilter::And(vec![]),
vec!["objectClass"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()]),
);
}
#[tokio::test]
async fn test_search_users() {
use chrono::prelude::*;
use lldap_domain::uuid;
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users().times(1).return_once(|_, _| {
Ok(vec![
UserAndGroups {
user: User {
user_id: UserId::new("bob_1"),
email: "bob@bobmail.bob".into(),
display_name: Some("Bôb Böbberson".to_string()),
uuid: uuid!("698e1d5f-7a40-3151-8745-b9b8a37839da"),
attributes: vec![
Attribute {
name: "first_name".into(),
value: "Bôb".to_string().into(),
},
Attribute {
name: "last_name".into(),
value: "Böbberson".to_string().into(),
},
],
..Default::default()
},
groups: None,
},
UserAndGroups {
user: User {
user_id: UserId::new("jim"),
email: "jim@cricket.jim".into(),
display_name: Some("Jimminy Cricket".to_string()),
attributes: vec![
Attribute {
name: "avatar".into(),
value: JpegPhoto::for_tests().into(),
},
Attribute {
name: "first_name".into(),
value: "Jim".to_string().into(),
},
Attribute {
name: "last_name".into(),
value: "Cricket".to_string().into(),
},
],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
creation_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
modified_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
password_modified_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
},
groups: None,
},
])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_user_search_request(
LdapFilter::And(vec![]),
vec![
"objectClass",
"dn",
"uid",
"mail",
"givenName",
"sn",
"cn",
"createTimestamp",
"entryUuid",
"jpegPhoto",
],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=bob_1,ou=people,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec!["Bôb Böbberson".to_string().into_bytes()]
},
LdapPartialAttribute {
atype: "createTimestamp".to_string(),
vals: vec![b"19700101000000Z".to_vec()]
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"698e1d5f-7a40-3151-8745-b9b8a37839da".to_vec()]
},
LdapPartialAttribute {
atype: "givenName".to_string(),
vals: vec!["Bôb".to_string().into_bytes()]
},
LdapPartialAttribute {
atype: "mail".to_string(),
vals: vec![b"bob@bobmail.bob".to_vec()]
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![
b"inetOrgPerson".to_vec(),
b"posixAccount".to_vec(),
b"mailAccount".to_vec(),
b"person".to_vec(),
b"customUserClass".to_vec(),
]
},
LdapPartialAttribute {
atype: "sn".to_string(),
vals: vec!["Böbberson".to_string().into_bytes()]
},
LdapPartialAttribute {
atype: "uid".to_string(),
vals: vec![b"bob_1".to_vec()]
},
],
}),
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=jim,ou=people,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"Jimminy Cricket".to_vec()]
},
LdapPartialAttribute {
atype: "createTimestamp".to_string(),
vals: vec![b"20140708091011Z".to_vec()]
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()]
},
LdapPartialAttribute {
atype: "givenName".to_string(),
vals: vec![b"Jim".to_vec()]
},
LdapPartialAttribute {
atype: "jpegPhoto".to_string(),
vals: vec![JpegPhoto::for_tests().into_bytes()]
},
LdapPartialAttribute {
atype: "mail".to_string(),
vals: vec![b"jim@cricket.jim".to_vec()]
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![
b"inetOrgPerson".to_vec(),
b"posixAccount".to_vec(),
b"mailAccount".to_vec(),
b"person".to_vec(),
b"customUserClass".to_vec(),
]
},
LdapPartialAttribute {
atype: "sn".to_string(),
vals: vec![b"Cricket".to_vec()]
},
LdapPartialAttribute {
atype: "uid".to_string(),
vals: vec![b"jim".to_vec()]
},
],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_pwd_changed_time_format() {
use lldap_domain::uuid;
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users().times(1).return_once(|_, _| {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob_1"),
email: "bob@bobmail.bob".into(),
uuid: uuid!("698e1d5f-7a40-3151-8745-b9b8a37839da"),
attributes: vec![],
password_modified_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
..Default::default()
},
groups: None,
}])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_user_search_request(LdapFilter::And(vec![]), vec!["pwdChangedTime"]);
if let LdapOp::SearchResultEntry(entry) =
&ldap_handler.do_search_or_dse(&request).await.unwrap()[0]
{
assert_eq!(entry.attributes.len(), 1);
assert_eq!(entry.attributes[0].atype, "pwdChangedTime");
assert_eq!(entry.attributes[0].vals.len(), 1);
assert_timestamp_within_margin(
&entry.attributes[0].vals[0],
Utc.with_ymd_and_hms(2014, 7, 8, 9, 10, 11).unwrap(),
Duration::seconds(1),
);
} else {
panic!("Expected SearchResultEntry");
}
}
}
+40 -9

@@ -3,7 +3,7 @@ use crate::core::{
group::{REQUIRED_GROUP_ATTRIBUTES, get_default_group_object_classes},
user::{REQUIRED_USER_ATTRIBUTES, get_default_user_object_classes},
};
use chrono::TimeZone;
use chrono::{NaiveDateTime, TimeZone};
use itertools::join;
use ldap3_proto::LdapResultCode;
use lldap_domain::{
@@ -18,6 +18,16 @@ use lldap_domain_model::model::UserColumn;
use std::collections::BTreeMap;
use tracing::{debug, instrument, warn};
/// Convert a NaiveDateTime to LDAP GeneralizedTime format (YYYYMMDDHHMMSSZ)
/// This is the standard format required by LDAP for timestamp attributes like pwdChangedTime
pub fn to_generalized_time(dt: &NaiveDateTime) -> Vec<u8> {
chrono::Utc
.from_utc_datetime(dt)
.format("%Y%m%d%H%M%SZ")
.to_string()
.into_bytes()
}
fn make_dn_pair<I>(mut iter: I) -> LdapResult<(String, String)>
where
I: Iterator<Item = String>,
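The GeneralizedTime layout emitted by `to_generalized_time` is fixed-width: fourteen digits followed by the literal `Z` UTC designator. A small illustrative parser for that layout, using only the standard library (this helper is not part of the codebase):

```rust
// Illustrative sketch: parse the "YYYYMMDDHHMMSSZ" layout produced by
// to_generalized_time into (year, month, day, hour, minute, second).
fn parse_generalized_time(s: &str) -> Option<(u32, u32, u32, u32, u32, u32)> {
    // Exactly 14 digits followed by the literal 'Z' (UTC designator).
    if s.len() != 15 || !s.ends_with('Z') {
        return None;
    }
    let d = |range: std::ops::Range<usize>| s[range].parse::<u32>().ok();
    Some((d(0..4)?, d(4..6)?, d(6..8)?, d(8..10)?, d(10..12)?, d(12..14)?))
}

fn main() {
    // The timestamp used by the tests below round-trips cleanly.
    assert_eq!(
        parse_generalized_time("20140708091011Z"),
        Some((2014, 7, 8, 9, 10, 11))
    );
    // RFC 3339 strings (the previous custom-attribute format) do not parse.
    assert_eq!(parse_generalized_time("2014-07-08T09:10:11+00:00"), None);
}
```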
@@ -300,16 +310,27 @@ pub struct LdapInfo {
pub ignored_group_attributes: Vec<AttributeName>,
}
impl LdapInfo {
pub fn new(
base_dn: &str,
ignored_user_attributes: Vec<AttributeName>,
ignored_group_attributes: Vec<AttributeName>,
) -> LdapResult<Self> {
let base_dn = parse_distinguished_name(&base_dn.to_ascii_lowercase())?;
let base_dn_str = join(base_dn.iter().map(|(k, v)| format!("{k}={v}")), ",");
Ok(Self {
base_dn,
base_dn_str,
ignored_user_attributes,
ignored_group_attributes,
})
}
}
pub fn get_custom_attribute(
attributes: &[Attribute],
attribute_name: &AttributeName,
) -> Option<Vec<Vec<u8>>> {
let convert_date = |date| {
chrono::Utc
.from_utc_datetime(date)
.to_rfc3339()
.into_bytes()
};
attributes
.iter()
.find(|a| &a.name == attribute_name)
@@ -335,9 +356,9 @@ pub fn get_custom_attribute(
AttributeValue::JpegPhoto(Cardinality::Unbounded(l)) => {
l.iter().map(|p| p.clone().into_bytes()).collect()
}
AttributeValue::DateTime(Cardinality::Singleton(dt)) => vec![convert_date(dt)],
AttributeValue::DateTime(Cardinality::Singleton(dt)) => vec![to_generalized_time(dt)],
AttributeValue::DateTime(Cardinality::Unbounded(l)) => {
l.iter().map(convert_date).collect()
l.iter().map(to_generalized_time).collect()
}
})
}
@@ -521,4 +542,14 @@ mod tests {
parsed_dn
);
}
#[test]
fn test_whitespace_in_ldap_info() {
assert_eq!(
LdapInfo::new(" ou=people, dc =example, dc=com \n", vec![], vec![])
.unwrap()
.base_dn_str,
"ou=people,dc=example,dc=com"
);
}
}
+14 -24
@@ -2,7 +2,7 @@ use crate::{
compare,
core::{
error::{LdapError, LdapResult},
utils::{LdapInfo, parse_distinguished_name},
utils::LdapInfo,
},
create, delete, modify,
password::{self, do_password_modification},
@@ -18,7 +18,7 @@ use ldap3_proto::proto::{
};
use lldap_access_control::AccessControlledBackendHandler;
use lldap_auth::access_control::ValidationResults;
use lldap_domain::{public_schema::PublicSchema, types::AttributeName};
use lldap_domain::public_schema::PublicSchema;
use lldap_domain_handlers::handler::{BackendHandler, LoginHandler, ReadSchemaBackendHandler};
use lldap_opaque_handler::OpaqueHandler;
use tracing::{debug, instrument};
@@ -59,7 +59,7 @@ pub(crate) fn make_modify_response(code: LdapResultCode, message: String) -> Lda
pub struct LdapHandler<Backend> {
user_info: Option<ValidationResults>,
backend_handler: AccessControlledBackendHandler<Backend>,
ldap_info: LdapInfo,
ldap_info: &'static LdapInfo,
session_uuid: uuid::Uuid,
}
@@ -89,23 +89,13 @@ enum Credentials<'s> {
impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend> {
pub fn new(
backend_handler: AccessControlledBackendHandler<Backend>,
mut ldap_base_dn: String,
ignored_user_attributes: Vec<AttributeName>,
ignored_group_attributes: Vec<AttributeName>,
ldap_info: &'static LdapInfo,
session_uuid: uuid::Uuid,
) -> Self {
ldap_base_dn.make_ascii_lowercase();
Self {
user_info: None,
backend_handler,
ldap_info: LdapInfo {
base_dn: parse_distinguished_name(&ldap_base_dn).unwrap_or_else(|_| {
panic!("Invalid value for ldap_base_dn in configuration: {ldap_base_dn}")
}),
base_dn_str: ldap_base_dn,
ignored_user_attributes,
ignored_group_attributes,
},
ldap_info,
session_uuid,
}
}
@@ -114,9 +104,9 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
pub fn new_for_tests(backend_handler: Backend, ldap_base_dn: &str) -> Self {
Self::new(
AccessControlledBackendHandler::new(backend_handler),
ldap_base_dn.to_string(),
vec![],
vec![],
Box::leak(Box::new(
LdapInfo::new(ldap_base_dn, Vec::new(), Vec::new()).unwrap(),
)),
uuid::Uuid::parse_str("550e8400-e29b-41d4-a716-446655440000").unwrap(),
)
}
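The `Box::leak` call above converts a freshly built `LdapInfo` into a `&'static` reference: the allocation is deliberately never freed, which is acceptable for one-time startup configuration. A self-contained sketch of the same pattern (the `Config` struct here is an illustrative stand-in, not the real `LdapInfo`):

```rust
// Illustrative stand-in for LdapInfo.
struct Config {
    base_dn: String,
}

fn make_static_config(base_dn: &str) -> &'static Config {
    // Box::leak intentionally leaks the heap allocation, so the returned
    // reference is valid for the rest of the process lifetime ('static).
    Box::leak(Box::new(Config {
        base_dn: base_dn.to_ascii_lowercase(),
    }))
}

fn main() {
    let cfg: &'static Config = make_static_config("DC=Example,DC=Com");
    // The reference can be copied freely into handlers without lifetimes
    // tying them to an owner, at the cost of one unreclaimed allocation.
    assert_eq!(cfg.base_dn, "dc=example,dc=com");
}
```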
@@ -171,13 +161,13 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
let backend_handler = self
.backend_handler
.get_user_restricted_lister_handler(user_info);
search::do_search(&backend_handler, &self.ldap_info, request).await
search::do_search(&backend_handler, self.ldap_info, request).await
}
#[instrument(skip_all, level = "debug", fields(dn = %request.dn))]
pub async fn do_bind(&mut self, request: &LdapBindRequest) -> Vec<LdapOp> {
let (code, message) =
match password::do_bind(&self.ldap_info, request, self.get_login_handler()).await {
match password::do_bind(self.ldap_info, request, self.get_login_handler()).await {
Ok(user_id) => {
self.user_info = self
.backend_handler
@@ -211,7 +201,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
};
do_password_modification(
credentials,
&self.ldap_info,
self.ldap_info,
&self.backend_handler,
self.get_opaque_handler(),
&password_request,
@@ -257,7 +247,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
self.backend_handler
.get_readable_handler(credentials, &user_id)
},
&self.ldap_info,
self.ldap_info,
credentials,
request,
)
@@ -275,7 +265,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
code: LdapResultCode::InsufficentAccessRights,
message: "Unauthorized write".to_string(),
})?;
create::create_user_or_group(backend_handler, &self.ldap_info, request).await
create::create_user_or_group(backend_handler, self.ldap_info, request).await
}
#[instrument(skip_all, level = "debug")]
@@ -288,7 +278,7 @@ impl<Backend: BackendHandler + LoginHandler + OpaqueHandler> LdapHandler<Backend
code: LdapResultCode::InsufficentAccessRights,
message: "Unauthorized write".to_string(),
})?;
delete::delete_user_or_group(backend_handler, &self.ldap_info, request).await
delete::delete_user_or_group(backend_handler, self.ldap_info, request).await
}
#[instrument(skip_all, level = "debug")]
+1 -1
@@ -7,7 +7,7 @@ pub(crate) mod modify;
pub(crate) mod password;
pub(crate) mod search;
pub use core::utils::{UserFieldType, map_group_field, map_user_field};
pub use core::utils::{LdapInfo, UserFieldType, map_group_field, map_user_field};
pub use handler::LdapHandler;
pub use core::group::get_default_group_object_classes;
+65 -757
@@ -17,7 +17,7 @@ use lldap_domain::{
public_schema::PublicSchema,
types::{Group, UserAndGroups},
};
use tracing::{debug, instrument, warn};
use tracing::{debug, warn};
#[derive(Debug)]
enum SearchScope {
@@ -289,16 +289,10 @@ pub fn make_ldap_subschema_entry(schema: PublicSchema) -> LdapOp {
],
})
}
pub(crate) fn is_root_dse_request(request: &LdapSearchRequest) -> bool {
if request.base.is_empty() && request.scope == LdapSearchScope::Base {
if let LdapFilter::Present(attribute) = &request.filter {
if attribute.eq_ignore_ascii_case("objectclass") {
return true;
}
}
}
false
request.base.is_empty()
&& request.scope == LdapSearchScope::Base
&& matches!(&request.filter, LdapFilter::Present(attr) if attr.eq_ignore_ascii_case("objectclass"))
}
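The refactor above collapses the nested `if`/`if let` chain into a single boolean expression with a `matches!` guard. A stand-alone sketch of the resulting shape, with simplified stand-ins for the `ldap3_proto` types:

```rust
// Simplified stand-ins for LdapSearchScope and LdapFilter.
#[derive(PartialEq)]
enum Scope {
    Base,
    Subtree,
}
enum Filter {
    Present(String),
    Other,
}

fn is_root_dse_request(base: &str, scope: &Scope, filter: &Filter) -> bool {
    // One expression: empty base, base scope, and a presence filter on
    // objectClass (case-insensitive), checked via a matches! guard.
    base.is_empty()
        && *scope == Scope::Base
        && matches!(filter, Filter::Present(attr) if attr.eq_ignore_ascii_case("objectclass"))
}

fn main() {
    assert!(is_root_dse_request("", &Scope::Base, &Filter::Present("objectClass".into())));
    assert!(!is_root_dse_request("dc=example", &Scope::Base, &Filter::Present("objectClass".into())));
    assert!(!is_root_dse_request("", &Scope::Subtree, &Filter::Other));
}
```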
pub(crate) fn is_subschema_entry_request(request: &LdapSearchRequest) -> bool {
@@ -402,7 +396,6 @@ async fn do_search_internal(
})
}
#[instrument(skip_all, level = "debug")]
pub async fn do_search(
backend_handler: &impl UserAndGroupListerBackendHandler,
ldap_info: &LdapInfo,
@@ -441,16 +434,16 @@ mod tests {
core::error::LdapError,
handler::tests::{
make_group_search_request, make_user_search_request, setup_bound_admin_handler,
setup_bound_handler_with_group, setup_bound_readonly_handler,
setup_bound_readonly_handler,
},
};
use chrono::{DateTime, Duration, NaiveDateTime, TimeZone};
use chrono::{DateTime, Duration, NaiveDateTime, TimeZone, Utc};
use ldap3_proto::proto::{LdapDerefAliases, LdapSearchScope, LdapSubstringFilter};
use lldap_domain::{
schema::{AttributeList, AttributeSchema, Schema},
types::{
Attribute, AttributeName, AttributeType, GroupDetails, GroupId, JpegPhoto,
LdapObjectClass, User, UserId,
Attribute, AttributeName, AttributeType, GroupId, JpegPhoto, LdapObjectClass, User,
UserId,
},
uuid,
};
@@ -460,28 +453,6 @@ mod tests {
use mockall::predicate::eq;
use pretty_assertions::assert_eq;
#[tokio::test]
async fn test_search_root_dse() {
let ldap_handler = setup_bound_admin_handler(MockTestBackendHandler::new()).await;
let request = LdapSearchRequest {
base: "".to_string(),
scope: LdapSearchScope::Base,
aliases: LdapDerefAliases::Never,
sizelimit: 0,
timelimit: 0,
typesonly: false,
filter: LdapFilter::Present("objectClass".to_string()),
attrs: vec!["supportedExtension".to_string()],
};
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
root_dse_response("dc=example,dc=com"),
make_search_success()
])
);
}
fn assert_timestamp_within_margin(
timestamp_bytes: &[u8],
base_timestamp_dt: DateTime<Utc>,
@@ -504,6 +475,28 @@ mod tests {
);
}
#[tokio::test]
async fn test_search_root_dse() {
let ldap_handler = setup_bound_admin_handler(MockTestBackendHandler::new()).await;
let request = LdapSearchRequest {
base: "".to_string(),
scope: LdapSearchScope::Base,
aliases: LdapDerefAliases::Never,
sizelimit: 0,
timelimit: 0,
typesonly: false,
filter: LdapFilter::Present("objectClass".to_string()),
attrs: vec!["supportedExtension".to_string()],
};
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
root_dse_response("dc=example,dc=com"),
make_search_success()
])
);
}
#[tokio::test]
async fn test_subschema_response() {
let ldap_handler = setup_bound_admin_handler(MockTestBackendHandler::new()).await;
@@ -666,672 +659,6 @@ mod tests {
assert_eq!(actual_reponse[1], make_search_success());
}
#[tokio::test]
async fn test_search_regular_user() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
eq(Some(UserRequestFilter::And(vec![
UserRequestFilter::And(Vec::new()),
UserRequestFilter::UserId(UserId::new("test")),
]))),
eq(false),
)
.times(1)
.return_once(|_, _| {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("test"),
..Default::default()
},
groups: None,
}])
});
let ldap_handler = setup_bound_handler_with_group(mock, "regular").await;
let request =
make_user_search_request::<String>(LdapFilter::And(vec![]), vec!["1.1".to_string()]);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=test,ou=people,dc=example,dc=com".to_string(),
attributes: vec![],
}),
make_search_success()
]),
);
}
#[tokio::test]
async fn test_search_readonly_user() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(eq(Some(UserRequestFilter::And(Vec::new()))), eq(false))
.times(1)
.return_once(|_, _| Ok(vec![]));
let ldap_handler = setup_bound_readonly_handler(mock).await;
let request =
make_user_search_request::<String>(LdapFilter::And(vec![]), vec!["1.1".to_string()]);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()]),
);
}
#[tokio::test]
async fn test_search_member_of() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(eq(Some(UserRequestFilter::And(Vec::new()))), eq(true))
.times(1)
.return_once(|_, _| {
Ok(vec![UserAndGroups {
user: User {
user_id: UserId::new("bob"),
..Default::default()
},
groups: Some(vec![GroupDetails {
group_id: GroupId(42),
display_name: "rockstars".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
uuid: uuid!("a1a2a3a4b1b2c1c2d1d2d3d4d5d6d7d8"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}]),
}])
});
let ldap_handler = setup_bound_readonly_handler(mock).await;
let request = make_user_search_request::<String>(
LdapFilter::And(vec![]),
vec!["memberOf".to_string()],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=bob,ou=people,dc=example,dc=com".to_string(),
attributes: vec![LdapPartialAttribute {
atype: "memberOf".to_string(),
vals: vec![b"cn=rockstars,ou=groups,dc=example,dc=com".to_vec()]
}],
}),
make_search_success(),
]),
);
}
#[tokio::test]
async fn test_search_user_as_scope() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
eq(Some(UserRequestFilter::And(vec![
UserRequestFilter::And(Vec::new()),
UserRequestFilter::UserId(UserId::new("bob")),
]))),
eq(false),
)
.times(1)
.return_once(|_, _| Ok(vec![]));
let ldap_handler = setup_bound_readonly_handler(mock).await;
let request = LdapSearchRequest {
base: "uid=bob,ou=people,Dc=example,dc=com".to_string(),
scope: LdapSearchScope::Base,
aliases: LdapDerefAliases::Never,
sizelimit: 0,
timelimit: 0,
typesonly: false,
filter: LdapFilter::And(vec![]),
attrs: vec!["1.1".to_string()],
};
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()]),
);
}
#[tokio::test]
async fn test_search_users() {
use chrono::prelude::*;
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users().times(1).return_once(|_, _| {
Ok(vec![
UserAndGroups {
user: User {
user_id: UserId::new("bob_1"),
email: "bob@bobmail.bob".into(),
display_name: Some("Bôb Böbberson".to_string()),
uuid: uuid!("698e1d5f-7a40-3151-8745-b9b8a37839da"),
attributes: vec![
Attribute {
name: "first_name".into(),
value: "Bôb".to_string().into(),
},
Attribute {
name: "last_name".into(),
value: "Böbberson".to_string().into(),
},
],
..Default::default()
},
groups: None,
},
UserAndGroups {
user: User {
user_id: UserId::new("jim"),
email: "jim@cricket.jim".into(),
display_name: Some("Jimminy Cricket".to_string()),
attributes: vec![
Attribute {
name: "avatar".into(),
value: JpegPhoto::for_tests().into(),
},
Attribute {
name: "first_name".into(),
value: "Jim".to_string().into(),
},
Attribute {
name: "last_name".into(),
value: "Cricket".to_string().into(),
},
],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
creation_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
modified_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
password_modified_date: Utc
.with_ymd_and_hms(2014, 7, 8, 9, 10, 11)
.unwrap()
.naive_utc(),
},
groups: None,
},
])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_user_search_request(
LdapFilter::And(vec![]),
vec![
"objectClass",
"dn",
"uid",
"mail",
"givenName",
"sn",
"cn",
"createTimestamp",
"entryUuid",
"jpegPhoto",
],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=bob_1,ou=people,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec!["Bôb Böbberson".to_string().into_bytes()]
},
LdapPartialAttribute {
atype: "createTimestamp".to_string(),
vals: vec![b"1970-01-01T00:00:00+00:00".to_vec()]
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"698e1d5f-7a40-3151-8745-b9b8a37839da".to_vec()]
},
LdapPartialAttribute {
atype: "givenName".to_string(),
vals: vec!["Bôb".to_string().into_bytes()]
},
LdapPartialAttribute {
atype: "mail".to_string(),
vals: vec![b"bob@bobmail.bob".to_vec()]
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![
b"inetOrgPerson".to_vec(),
b"posixAccount".to_vec(),
b"mailAccount".to_vec(),
b"person".to_vec(),
b"customUserClass".to_vec(),
]
},
LdapPartialAttribute {
atype: "sn".to_string(),
vals: vec!["Böbberson".to_string().into_bytes()]
},
LdapPartialAttribute {
atype: "uid".to_string(),
vals: vec![b"bob_1".to_vec()]
},
],
}),
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "uid=jim,ou=people,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"Jimminy Cricket".to_vec()]
},
LdapPartialAttribute {
atype: "createTimestamp".to_string(),
vals: vec![b"2014-07-08T09:10:11+00:00".to_vec()]
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()]
},
LdapPartialAttribute {
atype: "givenName".to_string(),
vals: vec![b"Jim".to_vec()]
},
LdapPartialAttribute {
atype: "jpegPhoto".to_string(),
vals: vec![JpegPhoto::for_tests().into_bytes()]
},
LdapPartialAttribute {
atype: "mail".to_string(),
vals: vec![b"jim@cricket.jim".to_vec()]
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![
b"inetOrgPerson".to_vec(),
b"posixAccount".to_vec(),
b"mailAccount".to_vec(),
b"person".to_vec(),
b"customUserClass".to_vec(),
]
},
LdapPartialAttribute {
atype: "sn".to_string(),
vals: vec![b"Cricket".to_vec()]
},
LdapPartialAttribute {
atype: "uid".to_string(),
vals: vec![b"jim".to_vec()]
},
],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(Vec::new()))))
.times(1)
.return_once(|_| {
Ok(vec![
Group {
id: GroupId(1),
display_name: "group_1".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob"), UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
},
Group {
id: GroupId(3),
display_name: "BestGroup".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
},
])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::And(vec![]),
vec![
"objectClass",
"dn",
"cn",
"uniqueMember",
"entryUuid",
"entryDN",
],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"group_1".to_vec()]
},
LdapPartialAttribute {
atype: "entryDN".to_string(),
vals: vec![b"uid=group_1,ou=groups,dc=example,dc=com".to_vec()],
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()],
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![b"groupOfUniqueNames".to_vec(), b"groupOfNames".to_vec(),],
},
LdapPartialAttribute {
atype: "uniqueMember".to_string(),
vals: vec![
b"uid=bob,ou=people,dc=example,dc=com".to_vec(),
b"uid=john,ou=people,dc=example,dc=com".to_vec(),
]
},
],
}),
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=BestGroup,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![
LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"BestGroup".to_vec()]
},
LdapPartialAttribute {
atype: "entryDN".to_string(),
vals: vec![b"uid=BestGroup,ou=groups,dc=example,dc=com".to_vec()],
},
LdapPartialAttribute {
atype: "entryUuid".to_string(),
vals: vec![b"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_vec()],
},
LdapPartialAttribute {
atype: "objectClass".to_string(),
vals: vec![b"groupOfUniqueNames".to_vec(), b"groupOfNames".to_vec(),],
},
LdapPartialAttribute {
atype: "uniqueMember".to_string(),
vals: vec![b"uid=john,ou=people,dc=example,dc=com".to_vec()]
},
],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_by_groupid() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::GroupId(GroupId(1)))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
id: GroupId(1),
display_name: "group_1".into(),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![UserId::new("bob"), UserId::new("john")],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::Equality("groupid".to_string(), "1".to_string()),
vec!["dn"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_filter() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(vec![
GroupRequestFilter::DisplayName("group_1".into()),
GroupRequestFilter::Member(UserId::new("bob")),
GroupRequestFilter::DisplayName("rockstars".into()),
false.into(),
GroupRequestFilter::Uuid(uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc")),
true.into(),
true.into(),
true.into(),
true.into(),
GroupRequestFilter::Not(Box::new(false.into())),
false.into(),
GroupRequestFilter::DisplayNameSubString(SubStringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
}),
]))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::And(vec![
LdapFilter::Equality("cN".to_string(), "Group_1".to_string()),
LdapFilter::Equality(
"uniqueMember".to_string(),
"uid=bob,ou=peopLe,Dc=eXample,dc=com".to_string(),
),
LdapFilter::Equality(
"dn".to_string(),
"uid=rockstars,ou=groups,dc=example,dc=com".to_string(),
),
LdapFilter::Equality(
"dn".to_string(),
"uid=rockstars,ou=people,dc=example,dc=com".to_string(),
),
LdapFilter::Equality(
"uuid".to_string(),
"04ac75e0-2900-3e21-926c-2f732c26b3fc".to_string(),
),
LdapFilter::Equality("obJEctclass".to_string(), "groupofUniqueNames".to_string()),
LdapFilter::Equality("objectclass".to_string(), "groupOfNames".to_string()),
LdapFilter::Present("objectclass".to_string()),
LdapFilter::Present("dn".to_string()),
LdapFilter::Not(Box::new(LdapFilter::Present(
"random_attribUte".to_string(),
))),
LdapFilter::Equality("unknown_attribute".to_string(), "randomValue".to_string()),
LdapFilter::Substring(
"cn".to_owned(),
LdapSubstringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
},
),
]),
vec!["1.1"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_filter_2() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::Not(Box::new(GroupRequestFilter::DisplayName(
"group_2".into(),
))),
]))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: Vec::new(),
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}])
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::Or(vec![LdapFilter::Not(Box::new(LdapFilter::Equality(
"displayname".to_string(),
"group_2".to_string(),
)))]),
vec!["cn"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"group_1".to_vec()]
},],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_groups_filter_3() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::AttributeEquality(
AttributeName::from("attr"),
"TEST".to_string().into(),
),
GroupRequestFilter::AttributeEquality(
AttributeName::from("attr"),
"test".to_string().into(),
),
]))))
.times(1)
.return_once(|_| {
Ok(vec![Group {
display_name: "group_1".into(),
id: GroupId(1),
creation_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
users: vec![],
uuid: uuid!("04ac75e0-2900-3e21-926c-2f732c26b3fc"),
attributes: vec![Attribute {
name: "Attr".into(),
value: "TEST".to_string().into(),
}],
modified_date: chrono::Utc.timestamp_opt(42, 42).unwrap().naive_utc(),
}])
});
mock.expect_get_schema().returning(|| {
Ok(Schema {
user_attributes: AttributeList {
attributes: Vec::new(),
},
group_attributes: AttributeList {
attributes: vec![AttributeSchema {
name: "Attr".into(),
attribute_type: AttributeType::String,
is_list: false,
is_visible: true,
is_editable: true,
is_hardcoded: false,
is_readonly: false,
}],
},
extra_user_object_classes: Vec::new(),
extra_group_object_classes: Vec::new(),
})
});
let ldap_handler = setup_bound_admin_handler(mock).await;
let request = make_group_search_request(
LdapFilter::Equality("Attr".to_string(), "TEST".to_string()),
vec!["cn"],
);
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![
LdapOp::SearchResultEntry(LdapSearchResultEntry {
dn: "cn=group_1,ou=groups,dc=example,dc=com".to_string(),
attributes: vec![LdapPartialAttribute {
atype: "cn".to_string(),
vals: vec![b"group_1".to_vec()]
},],
}),
make_search_success(),
])
);
}
#[tokio::test]
async fn test_search_group_as_scope() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(vec![
GroupRequestFilter::And(Vec::new()),
GroupRequestFilter::DisplayName("rockstars".into()),
]))))
.times(1)
.return_once(|_| Ok(vec![]));
let ldap_handler = setup_bound_readonly_handler(mock).await;
let request = LdapSearchRequest {
base: "uid=rockstars,ou=groups,Dc=example,dc=com".to_string(),
scope: LdapSearchScope::Base,
aliases: LdapDerefAliases::Never,
sizelimit: 0,
timelimit: 0,
typesonly: false,
filter: LdapFilter::And(vec![]),
attrs: vec!["1.1".to_string()],
};
assert_eq!(
ldap_handler.do_search_or_dse(&request).await,
Ok(vec![make_search_success()]),
);
}
#[tokio::test]
async fn test_search_groups_unsupported_substring() {
let ldap_handler = setup_bound_readonly_handler(MockTestBackendHandler::new()).await;
@@ -1370,11 +697,9 @@ mod tests {
async fn test_search_groups_error() {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::Or(vec![
GroupRequestFilter::Not(Box::new(GroupRequestFilter::DisplayName(
"group_2".into(),
))),
]))))
.with(eq(Some(GroupRequestFilter::Not(Box::new(
GroupRequestFilter::DisplayName("group_2".into()),
)))))
.times(1)
.return_once(|_| {
Err(lldap_domain_model::error::DomainError::InternalError(
@@ -1422,44 +747,34 @@ mod tests {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
eq(Some(UserRequestFilter::And(vec![UserRequestFilter::Or(
vec![
UserRequestFilter::Not(Box::new(UserRequestFilter::UserId(UserId::new(
"bob",
)))),
UserRequestFilter::UserId("bob_1".to_string().into()),
false.into(),
true.into(),
false.into(),
true.into(),
true.into(),
false.into(),
UserRequestFilter::Or(vec![
UserRequestFilter::AttributeEquality(
AttributeName::from("first_name"),
"FirstName".to_string().into(),
),
UserRequestFilter::AttributeEquality(
AttributeName::from("first_name"),
"firstname".to_string().into(),
),
]),
false.into(),
UserRequestFilter::UserIdSubString(SubStringFilter {
eq(Some(UserRequestFilter::Or(vec![
UserRequestFilter::Not(Box::new(UserRequestFilter::UserId(UserId::new("bob")))),
UserRequestFilter::UserId("bob_1".to_string().into()),
true.into(),
true.into(),
true.into(),
UserRequestFilter::AttributeEquality(
AttributeName::from("first_name"),
"FirstName".to_string().into(),
),
UserRequestFilter::AttributeEquality(
AttributeName::from("first_name"),
"firstname".to_string().into(),
),
UserRequestFilter::UserIdSubString(SubStringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
}),
UserRequestFilter::SubString(
UserColumn::DisplayName,
SubStringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
}),
UserRequestFilter::SubString(
UserColumn::DisplayName,
SubStringFilter {
initial: Some("iNIt".to_owned()),
any: vec!["1".to_owned(), "2aA".to_owned()],
final_: Some("finAl".to_owned()),
},
),
],
)]))),
},
),
]))),
eq(false),
)
.times(1)
@@ -1571,6 +886,7 @@ mod tests {
Ok(vec![make_search_success()])
);
}
#[tokio::test]
async fn test_search_member_of_filter_error() {
let mut mock = MockTestBackendHandler::new();
@@ -1598,11 +914,9 @@ mod tests {
let mut mock = MockTestBackendHandler::new();
mock.expect_list_users()
.with(
eq(Some(UserRequestFilter::And(vec![UserRequestFilter::Or(
vec![UserRequestFilter::Not(Box::new(
UserRequestFilter::Equality(UserColumn::DisplayName, "bob".to_string()),
))],
)]))),
eq(Some(UserRequestFilter::Not(Box::new(
UserRequestFilter::Equality(UserColumn::DisplayName, "bob".to_string()),
)))),
eq(false),
)
.times(1)
@@ -1709,7 +1023,7 @@ mod tests {
}])
});
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(Vec::new()))))
.with(eq(Some(GroupRequestFilter::True)))
.times(1)
.return_once(|_| {
Ok(vec![Group {
@@ -1795,7 +1109,7 @@ mod tests {
}])
});
mock.expect_list_groups()
.with(eq(Some(GroupRequestFilter::And(Vec::new()))))
.with(eq(Some(GroupRequestFilter::True)))
.returning(|_| {
Ok(vec![Group {
id: GroupId(1),
@@ -1830,13 +1144,7 @@ mod tests {
},
LdapPartialAttribute {
atype: "createtimestamp".to_string(),
vals: vec![
chrono::Utc
.timestamp_opt(0, 0)
.unwrap()
.to_rfc3339()
.into_bytes(),
],
vals: vec![b"19700101000000Z".to_vec()],
},
LdapPartialAttribute {
atype: "entryuuid".to_string(),
@@ -7,6 +7,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[features]
test = []
@@ -7,6 +7,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[features]
test = []
@@ -2,7 +2,11 @@ use crate::sql_backend_handler::SqlBackendHandler;
use async_trait::async_trait;
use lldap_domain::{
requests::{CreateUserRequest, UpdateUserRequest},
types::{AttributeName, GroupDetails, GroupId, Serialized, User, UserAndGroups, UserId, Uuid},
schema::Schema,
types::{
Attribute, AttributeName, GroupDetails, GroupId, Serialized, User, UserAndGroups, UserId,
Uuid,
},
};
use lldap_domain_handlers::handler::{
ReadSchemaBackendHandler, UserBackendHandler, UserListerBackendHandler, UserRequestFilter,
@@ -185,20 +189,12 @@ impl UserListerBackendHandler for SqlBackendHandler {
}
impl SqlBackendHandler {
async fn update_user_with_transaction(
transaction: &DatabaseTransaction,
request: UpdateUserRequest,
) -> Result<()> {
let lower_email = request.email.as_ref().map(|s| s.as_str().to_lowercase());
let now = chrono::Utc::now().naive_utc();
let update_user = model::users::ActiveModel {
user_id: ActiveValue::Set(request.user_id.clone()),
email: request.email.map(ActiveValue::Set).unwrap_or_default(),
lowercase_email: lower_email.map(ActiveValue::Set).unwrap_or_default(),
display_name: to_value(&request.display_name),
modified_date: ActiveValue::Set(now),
..Default::default()
};
fn compute_user_attribute_changes(
user_id: &UserId,
insert_attributes: Vec<Attribute>,
delete_attributes: Vec<AttributeName>,
schema: &Schema,
) -> Result<(Vec<model::user_attributes::ActiveModel>, Vec<AttributeName>)> {
let mut update_user_attributes = Vec::new();
let mut remove_user_attributes = Vec::new();
let mut process_serialized =
@@ -208,24 +204,20 @@ impl SqlBackendHandler {
}
ActiveValue::Set(_) => {
update_user_attributes.push(model::user_attributes::ActiveModel {
user_id: Set(request.user_id.clone()),
user_id: Set(user_id.clone()),
attribute_name: Set(attribute_name),
value,
})
}
_ => unreachable!(),
};
let schema = Self::get_schema_with_transaction(transaction).await?;
for attribute in request.insert_attributes {
for attribute in insert_attributes {
if schema
.user_attributes
.get_attribute_type(&attribute.name)
.is_some()
{
process_serialized(
ActiveValue::Set(attribute.value.into()),
attribute.name.clone(),
);
process_serialized(ActiveValue::Set(attribute.value.into()), attribute.name);
} else {
return Err(DomainError::InternalError(format!(
"User attribute name {} doesn't exist in the schema, yet was attempted to be inserted in the database",
@@ -233,7 +225,7 @@ impl SqlBackendHandler {
)));
}
}
for attribute in request.delete_attributes {
for attribute in delete_attributes {
if schema
.user_attributes
.get_attribute_type(&attribute)
@@ -246,6 +238,31 @@ impl SqlBackendHandler {
)));
}
}
Ok((update_user_attributes, remove_user_attributes))
}
async fn update_user_with_transaction(
transaction: &DatabaseTransaction,
request: UpdateUserRequest,
) -> Result<()> {
let schema = Self::get_schema_with_transaction(transaction).await?;
let (update_user_attributes, remove_user_attributes) =
Self::compute_user_attribute_changes(
&request.user_id,
request.insert_attributes,
request.delete_attributes,
&schema,
)?;
let lower_email = request.email.as_ref().map(|s| s.as_str().to_lowercase());
let now = chrono::Utc::now().naive_utc();
let update_user = model::users::ActiveModel {
user_id: ActiveValue::Set(request.user_id.clone()),
email: request.email.map(ActiveValue::Set).unwrap_or_default(),
lowercase_email: lower_email.map(ActiveValue::Set).unwrap_or_default(),
display_name: to_value(&request.display_name),
modified_date: ActiveValue::Set(now),
..Default::default()
};
update_user.update(transaction).await?;
if !remove_user_attributes.is_empty() {
model::UserAttributes::delete_many()
@@ -6,6 +6,7 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
async-trait = "0.1"
@@ -7,3 +7,4 @@ edition.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
@@ -68,7 +68,7 @@ services:
- LLDAP_JWT_SECRET=REPLACE_WITH_RANDOM
- LLDAP_KEY_SEED=REPLACE_WITH_RANDOM
- LLDAP_LDAP_BASE_DN=dc=example,dc=com
- LLDAP_LDAP_USER_PASS=adminPas$word
- LLDAP_LDAP_USER_PASS=CHANGE_ME # If the password contains '$', escape it (e.g. Pas$$word sets Pas$word)
# If using LDAPS, set enabled true and configure cert and key path
# - LLDAP_LDAPS_OPTIONS__ENABLED=true
# - LLDAP_LDAPS_OPTIONS__CERT_FILE=/path/to/certfile.crt
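The `$$` escaping mentioned above for `LLDAP_LDAP_USER_PASS` follows Docker Compose's interpolation rules: a literal `$` in a value has to be written as `$$`. A minimal sketch of the transformation (hypothetical password, for illustration only):

```shell
# Docker Compose collapses "$$" into a literal "$" before the value
# reaches the container. Hypothetical password value for illustration.
as_written='Pas$$word'      # what you put in the compose file
as_received=$(printf '%s\n' "$as_written" | sed 's/\$\$/$/g')
echo "$as_received"         # prints: Pas$word
```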
@@ -93,7 +93,7 @@ front-end.
### With Podman
LLDAP works well with rootless Podman either through command line deployment
or using [quadlets](example_configs/podman-quadlets/). The example quadlets
or using [quadlets](../example_configs/podman-quadlets/). The example quadlets
include configuration with postgresql and file based secrets, but have comments
for several other deployment strategies.
@@ -102,7 +102,7 @@ for several other deployment strategies.
See https://github.com/Evantage-WS/lldap-kubernetes for a LLDAP deployment for Kubernetes
You can bootstrap your lldap instance (users, groups)
using [bootstrap.sh](example_configs/bootstrap/bootstrap.md#kubernetes-job).
using [bootstrap.sh](../example_configs/bootstrap/bootstrap.md#kubernetes-job).
It can be run by Argo CD for managing users in git-opt way, or as a one-shot job.
### From a package repository
@@ -114,7 +114,7 @@ Depending on the distribution you use, it might be possible to install LLDAP
from a package repository, officially supported by the distribution or
community contributed.
Each package offers a [systemd service](https://wiki.archlinux.org/title/systemd#Using_units) `lldap.service` or [rc.d_lldap](example_configs/freebsd/rc.d_lldap) `rc.d/lldap` to (auto-)start and stop lldap.<br>
Each package offers a [systemd service](https://wiki.archlinux.org/title/systemd#Using_units) `lldap.service` or [rc.d_lldap](../example_configs/freebsd/rc.d_lldap) `rc.d/lldap` to (auto-)start and stop lldap.<br>
When using the distributed packages, the default login is `admin/password`. You can change that from the web UI after starting the service.
<details>
@@ -385,7 +385,7 @@ arguments to `cargo run`. Have a look at the docker template:
`lldap_config.docker_template.toml`.
You can also install it as a systemd service, see
[lldap.service](example_configs/lldap.service).
[lldap.service](../example_configs/lldap.service).
### Cross-compilation
@@ -0,0 +1,71 @@
# Nix Development Environment
LLDAP provides a Nix flake that sets up a complete development environment with all necessary tools and dependencies.
## Requirements
- [Nix](https://nixos.org/download.html) with flakes enabled
- (Optional) [direnv](https://direnv.net/) for automatic environment activation
## Usage
```bash
# Clone the repository
git clone https://github.com/lldap/lldap.git
cd lldap
# Enter the development environment
nix develop
# Build the workspace
cargo build --workspace
# Run tests
cargo test --workspace
# Check formatting and linting
cargo fmt --check --all
cargo clippy --tests --workspace -- -D warnings
# Build frontend
./app/build.sh
# Export GraphQL schema (if needed)
./export_schema.sh
# Start development server
cargo run -- run --config-file lldap_config.docker_template.toml
```
## Building with Nix
You can also build LLDAP directly using Nix:
```bash
# Build the default package (server)
nix build
# Build and run
nix run
```
## Development Shells
The flake provides two development shells:
- `default` - Full development environment
- `ci` - Minimal environment similar to CI
```bash
# Use the CI-like environment
nix develop .#ci
```
## Automatic Environment Activation (Optional)
For automatic environment activation when entering the project directory:
1. Install direnv: `nix profile install nixpkgs#direnv`
2. Set up direnv shell hook in your shell configuration
3. Navigate to the project directory and allow direnv: `direnv allow`
4. The environment will automatically activate when entering the directory
@@ -51,6 +51,7 @@ configuration files:
- [Peertube](peertube.md)
- [Penpot](penpot.md)
- [pgAdmin](pgadmin.md)
- [Pocket-ID](pocket-id.md)
- [Portainer](portainer.md)
- [PowerDNS Admin](powerdns_admin.md)
- [Prosody](prosody.md)
@@ -64,7 +64,7 @@ dc=example,dc=com
# Additional settings
## Group
## Parent Group
```
---------
```
@@ -99,6 +99,16 @@ ou=groups
member
```
## User membership attribute
```
distinguishedName
```
## Lookup using user attribute
```
false
```
## Object uniqueness field
```
uid
@@ -130,7 +130,7 @@ Fields description:
"isVisible": true
},
{
"name": "mail_alias",
"name": "mail-alias",
"attributeType": "STRING",
"isEditable": false,
"isList": true,
@@ -246,14 +246,14 @@ spec:
restartPolicy: OnFailure
containers:
- name: lldap-bootstrap
image: lldap/lldap:v0.5.0
image: lldap/lldap:latest
command:
- /bootstrap/bootstrap.sh
- /app/bootstrap.sh
env:
- name: LLDAP_URL
value: "http://lldap:8080"
value: "http://lldap:17170"
- name: LLDAP_ADMIN_USERNAME
valueFrom: { secretKeyRef: { name: lldap-admin-user, key: username } }
@@ -265,11 +265,6 @@ spec:
value: "true"
volumeMounts:
- name: bootstrap
mountPath: /bootstrap/bootstrap.sh
readOnly: true
subPath: bootstrap.sh
- name: user-configs
mountPath: /bootstrap/user-configs
readOnly: true
@@ -279,27 +274,9 @@ spec:
readOnly: true
volumes:
- name: bootstrap
configMap:
name: bootstrap
defaultMode: 0555
items:
- key: bootstrap.sh
path: bootstrap.sh
- name: user-configs
projected:
sources:
- secret:
name: lldap-admin-user
items:
- key: user-config.json
path: admin-config.json
- secret:
name: lldap-password-manager-user
items:
- key: user-config.json
path: password-manager-config.json
- secret:
name: lldap-bootstrap-configs
items:
@@ -0,0 +1,46 @@
# Gogs LDAP configuration
Gogs can make use of LDAP, and therefore of lldap.
The following configuration is adapted from the example configuration in [their repository](https://github.com/gogs/gogs/blob/main/conf/auth.d/ldap_bind_dn.conf.example).
The example targets a container deployment; the file should live at `conf/auth.d/some_name.conf`:
```ini
$ cat /srv/git/gogs/conf/auth.d/ldap_bind_dn.conf
id = 101
type = ldap_bind_dn
name = LDAP BindDN
is_activated = true
is_default = true
[config]
host = ldap.example.com
port = 6360
# 0 - Unencrypted, 1 - LDAPS, 2 - StartTLS
security_protocol = 1
# You either need to install the LDAPS certificate into your trust store,
# or skip verification altogether - a sane default for a restricted container deployment.
skip_verify = true
bind_dn = uid=<binduser>,ou=people,dc=example,dc=com
bind_password = `yourPasswordInBackticks`
user_base = dc=example,dc=com
attribute_username = uid
attribute_name = givenName
attribute_surname = sn
attribute_mail = mail
attributes_in_bind = false
# Further restricts the search under the `user_base`.
filter = (&(objectClass=person)(uid=%s))
# The initial administrator has to grant admin privileges manually.
# This is only possible for users who have logged in at least once.
# This renders the following filter mostly obsolete, though its response is accepted by Gogs.
admin_filter = (memberOf=cn=<yourAdminGroup>,ou=groups,dc=example,dc=com)
```
The `binduser` should be a member of `lldap_strict_readonly`.
The group `yourAdminGroup` should be adapted to your requirements; otherwise the entire `admin_filter` line can be omitted.
The angle brackets are for readability only and are not required.
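The `%s` placeholder in `filter` is replaced by the login name at authentication time. A minimal sketch of that substitution, with a hypothetical user `jdoe`:

```shell
# Sketch: how the final LDAP search filter is built from the template.
# The login name (hypothetical: jdoe) is substituted for %s.
template='(&(objectClass=person)(uid=%s))'
login='jdoe'
filter=$(printf '%s\n' "$template" | sed "s/%s/$login/")
echo "$filter"     # prints: (&(objectClass=person)(uid=jdoe))
```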
## Tested on Gogs
v0.14+dev via podman 4.3.1
@@ -58,9 +58,9 @@ services:
- LDAP_SEARCH_BASE=ou=people,dc=example,dc=com
- LDAP_BIND_DN=uid=admin,ou=people,dc=example,dc=com
- LDAP_BIND_PW=adminpassword
- LDAP_QUERY_FILTER_USER=(&(objectClass=inetOrgPerson)(|(uid=%u)(mail=%u)))
- LDAP_QUERY_FILTER_USER=(&(objectClass=inetOrgPerson)(mail=%s))
- LDAP_QUERY_FILTER_GROUP=(&(objectClass=groupOfUniqueNames)(uid=%s))
- LDAP_QUERY_FILTER_ALIAS=(&(objectClass=inetOrgPerson)(|(uid=%u)(mail=%u)))
- LDAP_QUERY_FILTER_ALIAS=(&(objectClass=inetOrgPerson)(mail=%s))
- LDAP_QUERY_FILTER_DOMAIN=(mail=*@%s)
# <<< Postfix LDAP Integration
# >>> Dovecot LDAP Integration
@@ -78,7 +78,8 @@ services:
container_name: roundcubemail
restart: always
volumes:
- roundcube_data:/var/www/html
- roundcube_config:/var/roundcube/config
- roundcube_plugins:/var/www/html/plugins
ports:
- "9002:80"
environment:
@@ -86,12 +87,15 @@ services:
- ROUNDCUBEMAIL_SKIN=elastic
- ROUNDCUBEMAIL_DEFAULT_HOST=mailserver # IMAP
- ROUNDCUBEMAIL_SMTP_SERVER=mailserver # SMTP
- ROUNDCUBEMAIL_COMPOSER_PLUGINS=roundcube/carddav
- ROUNDCUBEMAIL_PLUGINS=carddav
volumes:
mailserver-data:
mailserver-config:
mailserver-state:
lldap_data:
roundcube_data:
roundcube_config:
roundcube_plugins:
```
@@ -0,0 +1,31 @@
# Open-WebUI LDAP configuration
For the GUI settings (recommended), go to `Admin Panel > General`; there you will find the LDAP config.
To activate LDAP for the first time, restart Open-WebUI so that it loads the LDAP module.
The following settings have to be provided.
The user `binduser` has to be a member of `lldap_strict_readonly`.
| environment variable | GUI variable | example value | elaboration |
|----------------------|--------------|---------------|-------------|
| `ENABLE_LDAP` | LDAP | `true` | Toggle |
| `LDAP_SERVER_LABEL` | Label | `any` (lldap) | name |
| `LDAP_SERVER_HOST` | Host | `ldap.example.org` | IP/domain without scheme or port |
| `LDAP_SERVER_PORT` | Port | `6360` | At startup, Open-WebUI sometimes only accepts the default LDAP or LDAPS port (env configuration only) |
| `LDAP_ATTRIBUTE_FOR_MAIL` | Attribute for Mail | `mail` | default |
| `LDAP_ATTRIBUTE_FOR_USERNAME` | Attribute for Username | `uid` | default |
| `LDAP_APP_DN` | Application DN | `uid=binduser,ou=people,dc=example,dc=org` | Hovering shows: Bind user-dn |
| `LDAP_APP_PASSWORD` | Application DN Password | `<binduser-pw>` | - |
| `LDAP_SEARCH_BASE` | Search Base | `ou=people,dc=example,dc=org` | Who should get access from your instance. |
| `LDAP_SEARCH_FILTER` | Search Filter | `(objectClass=person)` or `(\|(objectClass=person)(memberOf=cn=webui-members,ou=groups,dc=example,dc=org))` | Query for Open WebUI account names. |
| `LDAP_USE_TLS` | TLS | `true` | Should be `true` for LDAPS, `false` for plain LDAP |
| `LDAP_CA_CERT_FILE` | Certificate Path | `/ca-chain.pem` | required when TLS activated |
| `LDAP_VALIDATE_CERT` | Validate Certificate | `true` | Set to `false` for self-signed certificates |
| `LDAP_CIPHERS` | Ciphers | ALL | default |
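Under the stated assumptions, the environment-variable form of the table above might look like the following sketch (example values only; adjust host, DNs, and password to your deployment):

```shell
# Sketch: Open-WebUI LDAP settings as environment variables.
# Values mirror the example column above; the password is hypothetical.
export ENABLE_LDAP=true
export LDAP_SERVER_LABEL=lldap
export LDAP_SERVER_HOST=ldap.example.org
export LDAP_SERVER_PORT=6360
export LDAP_ATTRIBUTE_FOR_MAIL=mail
export LDAP_ATTRIBUTE_FOR_USERNAME=uid
export LDAP_APP_DN='uid=binduser,ou=people,dc=example,dc=org'
export LDAP_APP_PASSWORD='changeme'   # hypothetical
export LDAP_SEARCH_BASE='ou=people,dc=example,dc=org'
export LDAP_SEARCH_FILTER='(objectClass=person)'
export LDAP_USE_TLS=true
export LDAP_CA_CERT_FILE=/ca-chain.pem
export LDAP_VALIDATE_CERT=true
```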
## Tested on Open WebUI
v0.6.26 via podman 5.4.2
@@ -92,6 +92,9 @@ Enable the following options on the OPNsense configuration page for your LLDAP s
- Synchronize groups: `Checked`
- Automatic user creation: `Checked`
### Constraint Groups
This limits the allowed groups to prevent injection attacks. To enable this feature, add `ou=groups,dc=example,dc=com` to the Authentication Containers field, separating multiple entries with a semicolon. Otherwise, disable this option.
### Create OPNsense Group
Go to `System > Access > Groups` and create a new group with the **same** name as the LLDAP group used to authenticate users for OPNsense.
@@ -1,40 +1,55 @@
# Getting Started with UNIX PAM using SSSD
This guide was tested with LDAPS on debian 12 with SSSD 2.8.2 and certificates signed by a registered CA.
## Configuring LLDAP
### Configure LDAPS
You **must** use LDAPS. You MUST NOT use plain LDAP. Even over a private network this costs you nearly nothing, and passwords will be sent in PLAIN TEXT without it.
Even in private networks you **should** configure LLDAP to communicate over LDAPS; otherwise passwords will be
transmitted in plain text. Just using a self-signed certificate already drastically improves security.
You can generate an SSL certificate for LLDAP with the following command. The `subjectAltName` is **required**. Make
sure all domains are listed there, even your `CN`.
```bash
openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 36500 -subj "/CN=ldap.example.com" -addext "subjectAltName = DNS:ldap.example.com"
```
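To confirm the generated certificate actually carries the SAN, you can inspect it with `openssl x509`. A sketch that builds a throwaway certificate and checks it (assumes OpenSSL 1.1.1+ for `-addext`):

```shell
# Sketch: generate a throwaway cert in a temp dir and verify the SAN.
tmp=$(mktemp -d)
openssl req -x509 -nodes -newkey rsa:2048 -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -sha256 -days 1 -subj "/CN=ldap.example.com" \
  -addext "subjectAltName = DNS:ldap.example.com" 2>/dev/null
# Should list "DNS:ldap.example.com" under Subject Alternative Name.
openssl x509 -in "$tmp/cert.pem" -noout -text | grep -A1 "Subject Alternative Name"
rm -rf "$tmp"
```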
With the generated certificates for your domain, copy the certificates and enable ldaps in the LLDAP configuration.
```
[ldaps_options]
enabled=true
port=6360
port=636
cert_file="cert.pem"
key_file="key.pem"
```
You can generate an SSL certificate for it with the following command. The `subjectAltName` is REQUIRED. Make sure all domains are listed there, even your `CN`.
### Setting up custom attributes
```bash
openssl req -x509 -nodes -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 36500 -nodes -subj "/CN=lldap.example.net" -addext "subjectAltName = DNS:lldap.example.net"
```
SSSD makes use of the `posixAccount` and `sshPublicKey` object types; their attributes have to be created manually in
LLDAP.
### Setting up the custom attributes
Add the following custom attributes to the **User schema**.
You will need to add the following custom attributes to the **user schema**.
| Attribute | Type | Multiple | Example |
|---------------|---------|:--------:|------------|
| uidNumber | integer | | 3000 |
| gidNumber | integer | | 3000 |
| homeDirectory | string | | /home/user |
| unixShell | string | | /bin/bash |
| sshPublicKey | string | X | *sshKey* |
- uidNumber (integer)
- gidNumber (integer, multiple values)
- homeDirectory (string)
- unixShell (string)
- sshPublicKey (string) (only if you're setting up SSH Public Key Sync)
Add the following custom attributes to the **Group schema.**
You will need to add the following custom attributes to the **group schema.**
| Attribute | Type | Multiple | Example |
|---------------|---------|:--------:|------------|
| gidNumber | integer | | 3000 |
- gidNumber (integer)
You will now need to populate these values for all the users you wish to be able to log in.
The only optional attributes are `unixShell` and `sshPublicKey`. All other attributes **must** be fully populated for
each group and user being used by SSSD. The `gidNumber` of the user schema represents the user's primary group. To add
more groups to a user, add the user to groups with a `gidNumber` set.
## Client setup
@@ -45,25 +60,113 @@ You need to install the packages `sssd` `sssd-tools` `libnss-sss` `libpam-sss` `
E.g. on Debian/Ubuntu
```bash
sudo apt update; sudo apt install -y sssd sssd-tools libnss-sss libpam-sss libsss-sudo
sudo apt install -y sssd sssd-tools libnss-sss libpam-sss libsss-sudo
```
### Configure the client packages
Use your favourite text editor to create/open the file `/etc/sssd/sssd.conf` .
This example makes the following assumptions which need to be adjusted:
E.g. Using nano
* Domain: `example.com`
* Domain Component: `dc=example,dc=com`
* LDAP URL: `ldaps://ldap.example.com/`
* Bind Username: `binduser`
* Bind Password: `bindpassword`
The global config filters **out** the root user and group. It also restricts the number of failed login attempts
with cached credentials if the server is unreachable.
Use your favourite text editor to create the SSSD global configuration:
```bash
sudo nano /etc/sssd/sssd.conf
```
Insert the contents of the provided template (sssd.conf), but you will need to change some of the configuration in the file. Comments have been made to guide you. The config file is an example if your LLDAP server is hosted at `lldap.example.com` and your domain is `example.com` with your dc being `dc=example,dc=com`.
```
[sssd]
config_file_version = 2
services = nss, pam, ssh
domains = example.com
SSSD will **refuse** to run if its config file is world-readable, so apply the following permissions to it:
[nss]
filter_users = root
filter_groups = root
[pam]
offline_failed_login_attempts = 3
offline_failed_login_delay = 5
[ssh]
```
The following domain configuration is set up for the LLDAP `RFC2307bis` schema and the custom attributes created at the
beginning of the guide. It allows all configured LDAP users to log in by default while filtering out users and groups
which don't have their posix IDs set.
Because caching is enabled make sure to check the [Debugging](#Debugging) section on how to
flush the cache if you are having problems.
Create a separate configuration file for your domain.
```bash
sudo nano /etc/sssd/conf.d/example.com.conf
```
```
[domain/example.com]
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
access_provider = permit
enumerate = True
cache_credentials = True
# ldap provider
ldap_uri = ldaps://ldap.example.com/
ldap_schema = rfc2307bis
ldap_search_base = dc=example,dc=com
ldap_default_bind_dn = uid=binduser,ou=people,dc=example,dc=com
ldap_default_authtok = bindpassword
# For certificates signed by a registered CA
ldap_tls_cacert = /etc/ssl/certs/ca-certificates.crt
# For self signed certificates
# ldap_tls_cacert = cert.pem
ldap_tls_reqcert = demand
# users
ldap_user_search_base = ou=people,dc=example,dc=com?subtree?(uidNumber=*)
ldap_user_object_class = posixAccount
ldap_user_name = uid
ldap_user_gecos = cn
ldap_user_uid_number = uidNumber
ldap_user_gid_number = gidNumber
ldap_user_home_directory = homeDirectory
ldap_user_shell = unixShell
ldap_user_ssh_public_key = sshPublicKey
# groups
ldap_group_search_base = ou=groups,dc=example,dc=com?subtree?(gidNumber=*)
ldap_group_object_class = groupOfUniqueNames
ldap_group_name = cn
ldap_group_gid_number = gidNumber
ldap_group_member = uniqueMember
```
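The `ldap_user_search_base`/`ldap_group_search_base` values use the sssd-ldap `base?scope?(filter)` syntax. A small sketch splitting the example value into its three fields:

```shell
# The sssd-ldap search-base syntax is "base?scope?(filter)".
# Split the example user search base into its three fields.
value='ou=people,dc=example,dc=com?subtree?(uidNumber=*)'
base=${value%%\?*}      # everything before the first "?"
rest=${value#*\?}       # everything after the first "?"
scope=${rest%%\?*}
filter=${rest#*\?}
echo "$base"      # prints: ou=people,dc=example,dc=com
echo "$scope"     # prints: subtree
echo "$filter"    # prints: (uidNumber=*)
```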
SSSD will **refuse** to run if its config files have the wrong permissions, so apply the following permissions to the
files:
```bash
sudo chmod 600 /etc/sssd/sssd.conf
sudo chmod 600 /etc/sssd/conf.d/example.com.conf
```
Enable automatic creation of home directories:
```bash
sudo pam-auth-update --enable mkhomedir
```
Restart SSSD to apply any changes:
@@ -72,26 +175,11 @@ Restart SSSD to apply any changes:
sudo systemctl restart sssd
```
Enable automatic creation of home directories
```bash
sudo pam-auth-update --enable mkhomedir
```
## Permissions and SSH Key sync
### SSH Key Sync
In order to do this, you need to set up the custom attribute `sshPublicKey` in the user schema. Then, you must uncomment the following line in the SSSD config file (assuming you are using the provided template):
```bash
sudo nano /etc/sssd/sssd.conf
```
```jsx
ldap_user_ssh_public_key = sshPublicKey
```
And the following to the bottom of your OpenSSH config file:
Add the following to the bottom of your OpenSSH config file:
```bash
sudo nano /etc/ssh/sshd_config
```
@@ -111,11 +199,15 @@ sudo systemctl restart sssd
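The exact `sshd_config` lines are cut off in the hunk above; the standard SSSD integration (identical on most distributions, but verify the `sss_ssh_authorizedkeys` path on yours) looks like:
```
# Ask SSSD for the user's public keys stored in LDAP
# (sshd appends the username as the argument automatically)
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody
```
After editing, restart `sshd` for the change to take effect.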
### Permissions Sync
Linux often manages permissions to tools such as Sudo and Docker based on group membership. There are two possible ways to achieve this.
**Option 1**
**If all your client systems are set up identically,** you can just check the group id of the local group, e.g. `sudo` being 27 on most Debian and Ubuntu installs, and set that as the gid in LLDAP. For tools such as Docker, you can create a group with a custom gid on each system before install (it must be the same on all of them), and use that GID for the LLDAP group.
Sudo
@@ -123,15 +215,16 @@ Sudo
Docker
```bash
sudo groupadd docker -g 722
```
![image](https://github.com/user-attachments/assets/face88d0-5a20-4442-a5e3-9f6a1ae41b68)
**Option 2**
Create a group in LLDAP that you would like all your users who have sudo access to be in, and add the following to the bottom of `/etc/sudoers`.
E.g. if your group is named `lldap_sudo`
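The standard sudoers entry granting that group's members full sudo rights would look like this (the group name `lldap_sudo` follows the example above):
```
%lldap_sudo ALL=(ALL:ALL) ALL
```
Edit `/etc/sudoers` with `visudo` so syntax errors are caught before they lock you out.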
@@ -143,15 +236,21 @@ E.g. if your group is named `lldap_sudo`
To verify your config file's validity, you can run the following command:
```bash
sudo sssctl config-check
```
To flush SSSD's cache:
```bash
sudo sss_cache -E
```
Man pages
```bash
man sssd
man sssd-ldap
```
## Final Notes
To see the old guide for NSLCD, go to NSLCD.md.
@@ -0,0 +1,27 @@
# LLDAP Configuration for Pocket-ID
[Pocket-ID](https://pocket-id.org/) is a simple, easy-to-use OIDC provider that lets users authenticate to your services using passkeys.
| Section | Setting | Value |
|-----------------------|------------------------------------|-----------------------------------------------------------|
| **Client Configuration** | LDAP URL | ldaps://url:port |
| | LDAP Bind DN | uid=binduser,ou=people,dc=example,dc=com |
| | LDAP Bind Password | password for binduser |
| | LDAP Base DN | dc=example,dc=com |
| | User Search Filter | (objectClass=person) |
| | Groups Search Filter | (objectClass=groupOfNames) |
| | Skip Certificate Verification | true/false |
| | Keep disabled users from LDAP | false |
| **Attribute Mapping** | User Unique Identifier Attribute | uuid |
| | Username Attribute | uid |
| | User Mail Attribute | mail |
| | User First Name Attribute | givenName |
| | User Last Name Attribute | sn |
| | User Profile Picture Attribute | jpegPhoto |
| | Group Members Attribute | member |
| | Group Unique Identifier Attribute | uuid |
| | Group Name Attribute | cn |
| | Admin Group Name | pocketid_admin_group_name |
Save and Sync.
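If you want to restrict which LLDAP users may sign in to Pocket-ID, the User Search Filter can be narrowed to members of a group; a hypothetical example (the group name `pocketid_users` is an assumption — adjust to your tree):
```
(&(objectClass=person)(memberOf=cn=pocketid_users,ou=groups,dc=example,dc=com))
```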
@@ -31,13 +31,13 @@ Starting `lldap.service` will start all the other services, but stopping it will
- At this point, you should be able to start the container.
- Test this with:
```bash
$ systemctl --user daemon-reload
$ systemctl --user start lldap
$ systemctl --user status lldap
```
- Assuming it launched correctly, you should now stop it again.
```bash
$ systemctl --user stop lldap
```
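The `systemctl --user` commands work because the container is declared as a Podman Quadlet unit; a minimal `lldap.container` sketch for orientation (the image tag, ports, and volume path here are assumptions, not the project's exact file):
```
[Unit]
Description=LLDAP container

[Container]
Image=docker.io/lldap/lldap:stable
PublishPort=3890:3890
PublishPort=17170:17170
Volume=%h/.config/lldap/data:/data

[Install]
WantedBy=default.target
```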
- Make any adjustments you feel are necessary to the network files.
- Now all that's left to do is the [bootstrapping process](../bootstrap/bootstrap.md#docker-compose):
@@ -45,8 +45,8 @@ Starting `lldap.service` will start all the other services, but stopping it will
- Toward the end of the container section, uncomment the lines in `lldap.container` regarding the bootstrap process.
- Start the container:
```bash
$ systemctl --user daemon-reload
$ systemctl --user start lldap
```
- Attach a terminal to the container, and run `bootstrap.sh`:
```bash
```
@@ -56,9 +56,15 @@
```
ou=groups,dc=example,dc=com
```
#### Group Membership Attribute
```
uniqueMember
```
#### Group Filter
Is optional:
```
(objectClass=groupofuniquenames)
```
## Admin group search configurations
Use the same configuration as above to grant users admin rights in their respective teams.
You can then also fetch all groups, and select which groups have universal admin rights.
@@ -0,0 +1,98 @@
{
"nodes": {
"crane": {
"locked": {
"lastModified": 1757183466,
"narHash": "sha256-kTdCCMuRE+/HNHES5JYsbRHmgtr+l9mOtf5dpcMppVc=",
"owner": "ipetkov",
"repo": "crane",
"rev": "d599ae4847e7f87603e7082d73ca673aa93c916d",
"type": "github"
},
"original": {
"owner": "ipetkov",
"repo": "crane",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1757487488,
"narHash": "sha256-zwE/e7CuPJUWKdvvTCB7iunV4E/+G0lKfv4kk/5Izdg=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "ab0f3607a6c7486ea22229b92ed2d355f1482ee0",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"crane": "crane",
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"rust-overlay": "rust-overlay"
}
},
"rust-overlay": {
"inputs": {
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1757730403,
"narHash": "sha256-Jxl4OZRVsXs8JxEHUVQn3oPu6zcqFyGGKaFrlNgbzp0=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "3232f7f8bd07849fc6f4ae77fe695e0abb2eba2c",
"type": "github"
},
"original": {
"owner": "oxalica",
"repo": "rust-overlay",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}
@@ -0,0 +1,162 @@
{
description = "LLDAP - Light LDAP implementation for authentication";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
rust-overlay = {
url = "github:oxalica/rust-overlay";
inputs.nixpkgs.follows = "nixpkgs";
};
crane = {
url = "github:ipetkov/crane";
};
};
outputs = { self, nixpkgs, flake-utils, rust-overlay, crane }:
flake-utils.lib.eachDefaultSystem (system:
let
overlays = [ (import rust-overlay) ];
pkgs = import nixpkgs {
inherit system overlays;
};
# MSRV from the project
rustVersion = "1.89.0";
# Rust toolchain with required components
rustToolchain = pkgs.rust-bin.stable.${rustVersion}.default.override {
extensions = [ "rust-src" "clippy" "rustfmt" ];
targets = [
"wasm32-unknown-unknown"
"x86_64-unknown-linux-musl"
"aarch64-unknown-linux-musl"
"armv7-unknown-linux-musleabihf"
];
};
craneLib = crane.lib.${system}.overrideToolchain rustToolchain;
# Common build inputs
nativeBuildInputs = with pkgs; [
# Rust toolchain and tools
rustToolchain
wasm-pack
# Build tools
pkg-config
# Compression and utilities
gzip
curl
wget
# Development tools
git
jq
# Cross-compilation support
gcc
];
buildInputs = with pkgs; [
# System libraries that might be needed
openssl
sqlite
] ++ lib.optionals stdenv.isDarwin [
# macOS specific dependencies
darwin.apple_sdk.frameworks.Security
darwin.apple_sdk.frameworks.SystemConfiguration
];
# Environment variables
commonEnvVars = {
CARGO_TERM_COLOR = "always";
RUST_BACKTRACE = "1";
# Cross-compilation environment
CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER = "${pkgs.pkgsStatic.stdenv.cc}/bin/cc";
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER = "${pkgs.pkgsCross.aarch64-multiplatform.stdenv.cc}/bin/aarch64-unknown-linux-gnu-gcc";
CARGO_TARGET_ARMV7_UNKNOWN_LINUX_MUSLEABIHF_LINKER = "${pkgs.pkgsCross.armv7l-hf-multiplatform.stdenv.cc}/bin/arm-unknown-linux-gnueabihf-gcc";
};
in
{
# Development shells
devShells = {
default = pkgs.mkShell ({
inherit nativeBuildInputs buildInputs;
shellHook = ''
echo "🔐 LLDAP Development Environment"
echo "==============================================="
echo "Rust version: ${rustVersion}"
echo "Standard cargo commands available:"
echo " cargo build --workspace - Build the workspace"
echo " cargo test --workspace - Run tests"
echo " cargo clippy --tests --workspace -- -D warnings - Run linting"
echo " cargo fmt --check --all - Check formatting"
echo " ./app/build.sh - Build frontend WASM"
echo " ./export_schema.sh - Export GraphQL schema"
echo "==============================================="
echo ""
# Ensure wasm-pack is available
if ! command -v wasm-pack &> /dev/null; then
echo "WARNING: wasm-pack not found in PATH"
fi
# Check that we're in the project root directory
if [[ "$(git rev-parse --show-toplevel 2>/dev/null)" != "$PWD" ]]; then
echo "WARNING: Run this from the project root directory"
fi
'';
} // commonEnvVars);
# Minimal shell for CI-like environment
ci = pkgs.mkShell ({
inherit nativeBuildInputs buildInputs;
shellHook = ''
echo "🤖 LLDAP CI Environment"
echo "Running with Rust ${rustVersion}"
'';
} // commonEnvVars);
};
# Package outputs (optional - for building with Nix)
packages = {
default = craneLib.buildPackage {
src = craneLib.cleanCargoSource (craneLib.path ./.);
inherit nativeBuildInputs buildInputs;
# Build only the server by default
cargoExtraArgs = "-p lldap";
# Skip tests in the package build
doCheck = false;
meta = with pkgs.lib; {
description = "Light LDAP implementation for authentication";
homepage = "https://github.com/lldap/lldap";
license = licenses.gpl3Only;
maintainers = with maintainers; [ ];
platforms = platforms.unix;
};
};
};
# Formatter for the flake itself
formatter = pkgs.nixpkgs-fmt;
# Apps for running via `nix run`
apps = {
default = flake-utils.lib.mkApp {
drv = self.packages.${system}.default;
};
};
});
}
@@ -7,6 +7,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
anyhow = "*"
@@ -0,0 +1,2 @@
[toolchain]
channel = "1.89.0"
@@ -605,6 +605,7 @@ main() {
local group_schema_files=()
local file=''
shopt -s nullglob
[[ -d "$USER_CONFIGS_DIR" ]] && for file in "${USER_CONFIGS_DIR}"/*.json; do
user_config_files+=("$file")
done
@@ -617,6 +618,7 @@ main() {
[[ -d "$GROUP_SCHEMAS_DIR" ]] && for file in "${GROUP_SCHEMAS_DIR}"/*.json; do
group_schema_files+=("$file")
done
shopt -u nullglob
if ! check_configs_validity "${group_config_files[@]}" "${user_config_files[@]}" "${group_schema_files[@]}" "${user_schema_files[@]}"; then
exit 1
@@ -710,9 +712,9 @@ main() {
redundant_users="$(printf '%s' "$redundant_users" | jq --compact-output --arg id "$id" '. - [$id]')"
if [[ "$password_file" != 'null' ]] && [[ "$password_file" != '""' ]]; then
"$LLDAP_SET_PASSWORD_PATH" --base-url "$LLDAP_URL" --token "$TOKEN" --username "$id" --password "$(cat $password_file)"
elif [[ "$password" != 'null' ]] && [[ "$password" != '""' ]]; then
"$LLDAP_SET_PASSWORD_PATH" --base-url "$LLDAP_URL" --token "$TOKEN" --username "$id" --password "$password"
fi
# Process custom attributes
@@ -9,6 +9,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
[dependencies]
actix = "0.13"
@@ -136,7 +137,7 @@ features = ["full"]
version = "1.25"
[dependencies.uuid]
features = ["v1", "v3", "v4"]
version = "1"
[dependencies.tracing-forest]
@@ -35,6 +35,7 @@ use std::{
};
use time::ext::NumericalDuration;
use tracing::{debug, info, instrument, warn};
use uuid::Uuid;
type Token<S> = jwt::Token<jwt::Header, JWTClaims, S>;
type SignedToken = Token<jwt::token::Signed>;
@@ -56,6 +57,7 @@ async fn create_jwt<Handler: TcpBackendHandler>(
let claims = JWTClaims {
exp: Utc::now() + chrono::Duration::days(1),
iat: Utc::now(),
jti: Uuid::new_v4(),
user: user.to_string(),
groups: groups
.into_iter()
@@ -189,6 +191,7 @@ where
user.display_name
.as_deref()
.unwrap_or_else(|| user.user_id.as_str()),
user.user_id.as_str(),
user.email.as_str(),
&token,
&data.server_url,
@@ -5,9 +5,9 @@ use crate::{
},
database_string::DatabaseUrl,
};
use anyhow::{Context, Result, anyhow, bail};
use figment::{
Figment, Provider,
providers::{Env, Format, Serialized, Toml},
};
use figment_file_provider_adapter::FileAdapter;
@@ -416,37 +416,36 @@ impl ConfigOverrider for RunOpts {
fn override_config(&self, config: &mut Configuration) {
self.general_config.override_config(config);
self.server_key_file
.as_ref()
.inspect(|path| config.key_file = path.to_string());
self.server_key_seed
.as_ref()
.inspect(|seed| config.key_seed = Some(SecUtf8::from(seed.as_str())));
self.ldap_port.inspect(|&port| config.ldap_port = port);
self.http_port.inspect(|&port| config.http_port = port);
self.http_url
.as_ref()
.inspect(|&url| config.http_url = HttpUrl(url.clone()));
self.database_url
.as_ref()
.inspect(|&database_url| config.database_url = database_url.clone());
self.force_ldap_user_pass_reset
.inspect(|&force_ldap_user_pass_reset| {
config.force_ldap_user_pass_reset = force_ldap_user_pass_reset;
});
self.force_update_private_key
.inspect(|&force_update_private_key| {
config.force_update_private_key = force_update_private_key;
});
self.smtp_opts.override_config(config);
self.ldaps_opts.override_config(config);
}
@@ -461,18 +460,19 @@ impl ConfigOverrider for TestEmailOpts {
impl ConfigOverrider for LdapsOpts {
fn override_config(&self, config: &mut Configuration) {
self.ldaps_enabled
.inspect(|&enabled| config.ldaps_options.enabled = enabled);
self.ldaps_port
.inspect(|&port| config.ldaps_options.port = port);
self.ldaps_cert_file
.as_ref()
.inspect(|path| config.ldaps_options.cert_file.clone_from(path));
self.ldaps_key_file
.as_ref()
.inspect(|path| config.ldaps_options.key_file.clone_from(path));
}
}
@@ -486,33 +486,40 @@ impl ConfigOverrider for GeneralConfigOpts {
impl ConfigOverrider for SmtpOpts {
fn override_config(&self, config: &mut Configuration) {
self.smtp_from
.as_ref()
.inspect(|&from| config.smtp_options.from = Some(Mailbox(from.clone())));
self.smtp_reply_to
.as_ref()
.inspect(|&reply_to| config.smtp_options.reply_to = Some(Mailbox(reply_to.clone())));
self.smtp_server
.as_ref()
.inspect(|server| config.smtp_options.server.clone_from(server));
self.smtp_port
.inspect(|&port| config.smtp_options.port = port);
self.smtp_user
.as_ref()
.inspect(|user| config.smtp_options.user.clone_from(user));
self.smtp_password
.as_ref()
.inspect(|&password| config.smtp_options.password = SecUtf8::from(password.clone()));
self.smtp_encryption.as_ref().inspect(|&smtp_encryption| {
config.smtp_options.smtp_encryption = smtp_encryption.clone();
});
self.smtp_tls_required
.inspect(|&tls_required| config.smtp_options.tls_required = Some(tls_required));
self.smtp_enable_password_reset
.inspect(|&enable_password_reset| {
config.smtp_options.enable_password_reset = enable_password_reset;
});
}
}
@@ -556,6 +563,45 @@ fn expected_keys(dict: &figment::value::Dict) -> HashSet<String> {
keys
}
fn check_for_unexpected_env_variables<P: Provider>(env_variable_provider: P) {
use figment::Profile;
let expected_keys = expected_keys(
&Figment::from(Serialized::defaults(
ConfigurationBuilder::default().private_build().unwrap(),
))
.data()
.unwrap()[&Profile::default()],
);
extract_keys(&env_variable_provider.data().unwrap()[&Profile::default()])
.iter()
.filter(|k| !expected_keys.contains(k.as_str()))
.for_each(|k| {
eprintln!("WARNING: Unknown environment variable: {k}");
});
}
fn generate_jwt_sample_error() -> String {
use rand::{Rng, seq::SliceRandom};
struct Symbols;
impl rand::distributions::Distribution<char> for Symbols {
fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> char {
*b"0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+,-./:;<=>?_~!@#$%^&*()[]{}:;".choose(rng).unwrap() as char
}
}
format!(
"The JWT secret must be initialized to a random string, preferably at least 32 characters long. \
Either set the `jwt_secret` config value or the `LLDAP_JWT_SECRET` environment variable. \
You can generate the value by running\n\
LC_ALL=C tr -dc 'A-Za-z0-9!#%&'\\''()*+,-./:;<=>?@[\\]^_{{|}}~' </dev/urandom | head -c 32; echo ''\n\
or you can use this random value: {}",
rand::thread_rng()
.sample_iter(&Symbols)
.take(32)
.collect::<String>()
)
}
pub fn init<C>(overrides: C) -> Result<Configuration>
where
C: TopLevelCommandOpts + ConfigOverrider,
@@ -581,22 +627,7 @@ where
if config.verbose {
println!("Configuration: {:#?}", &config);
}
check_for_unexpected_env_variables(env_variable_provider());
config.server_setup = Some(get_server_setup(
&config.key_file,
config
@@ -606,27 +637,10 @@ where
.unwrap_or_default(),
figment_config,
)?);
config
.jwt_secret
.as_ref()
.ok_or_else(|| anyhow!("{}", generate_jwt_sample_error()))?;
if config.smtp_options.tls_required.is_some() {
println!(
"DEPRECATED: smtp_options.tls_required field is deprecated, it never did anything. You can replace it with smtp_options.smtp_encryption."
@@ -5,9 +5,8 @@ use actix_service::{ServiceFactoryExt, fn_service};
use anyhow::{Context, Result, anyhow};
use ldap3_proto::{LdapCodec, control::LdapControl, proto::LdapMsg, proto::LdapOp};
use lldap_access_control::AccessControlledBackendHandler;
use lldap_domain_handlers::handler::{BackendHandler, LoginHandler};
use lldap_ldap::{LdapHandler, LdapInfo};
use lldap_opaque_handler::OpaqueHandler;
use rustls::PrivateKey;
use tokio_rustls::TlsAcceptor as RustlsTlsAcceptor;
@@ -71,9 +70,7 @@ where
async fn handle_ldap_stream<Stream, Backend>(
stream: Stream,
backend_handler: Backend,
ldap_info: &'static LdapInfo,
) -> Result<Stream>
where
Backend: BackendHandler + LoginHandler + OpaqueHandler + 'static,
@@ -88,9 +85,7 @@ where
let session_uuid = Uuid::new_v4();
let mut session = LdapHandler::new(
AccessControlledBackendHandler::new(backend_handler),
ldap_info,
session_uuid,
);
@@ -170,9 +165,19 @@ where
{
let context = (
backend_handler,
Box::leak(Box::new(
LdapInfo::new(
&config.ldap_base_dn,
config.ignored_user_attributes.clone(),
config.ignored_group_attributes.clone(),
)
.with_context(|| {
format!(
"Invalid value for ldap_base_dn in configuration: {}",
&config.ldap_base_dn
)
})?,
)) as &'static LdapInfo,
);
let context_for_tls = context.clone();
@@ -182,15 +187,8 @@ where
fn_service(move |stream: TcpStream| {
let context = context.clone();
async move {
let (handler, ldap_info) = context;
handle_ldap_stream(stream, handler, ldap_info).await
}
})
.map_err(|err: anyhow::Error| error!("[LDAP] Service Error: {:#}", err))
@@ -211,19 +209,9 @@ where
fn_service(move |stream: TcpStream| {
let tls_context = tls_context.clone();
async move {
let ((handler, ldap_info), tls_acceptor) = tls_context;
let tls_stream = tls_acceptor.accept(stream).await?;
handle_ldap_stream(tls_stream, handler, ldap_info).await
}
})
.map_err(|err: anyhow::Error| error!("[LDAPS] Service Error: {:#}", err))
@@ -80,6 +80,7 @@ async fn send_email(
}
pub async fn send_password_reset_email(
display_name: &str,
username: &str,
to: &str,
token: &str,
@@ -93,7 +94,10 @@ pub async fn send_password_reset_email(
.unwrap()
.extend(["reset-password", "step2", token]);
let body = format!(
"Hello {display_name},
Your username is: \"{username}\"
This email has been sent to you in order to validate your identity.
If you did not initiate the process your credentials might have been
compromised. You should reset your password and contact an administrator.
@@ -1,3 +1,4 @@
#![allow(dead_code)]
use crate::common::env;
use anyhow::{Context, Result, anyhow};
use graphql_client::GraphQLQuery;
@@ -7,6 +7,7 @@ authors.workspace = true
homepage.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html