mirror of https://github.com/ellmau/adf-obdd.git synced 2025-12-20 09:39:38 +01:00

Compare commits


62 Commits
0.2.4 ... main

Author SHA1 Message Date
Lukas Gerlach
ff12d4fede
Add naive af support to webserver (#195)
* Update flake

* Remove cargo-kcov

* Make AdfParser publicly accessible

* Fix mongodb version to 6

* Add naive AF support in Web

* Add missing doc strings

* Remove unused import

* Remove TODO comment

* Update DevSkim Action

* Upgrade flake

* Apply clippy suggestions

---------

Co-authored-by: monsterkrampe <monsterkrampe@users.noreply.github.com>
2025-06-30 08:51:23 +02:00
Lukas Gerlach
c9631346ab
Update Data Protection Commissioner address (#194)
Co-authored-by: monsterkrampe <monsterkrampe@users.noreply.github.com>
2025-05-22 10:55:50 +02:00
Stefan Ellmauthaler
ca6c94f2e5 Update versions and fix formatting and clippy issues
flake.lock: Update

Flake lock file updates:

• Updated input 'flake-utils':
    'github:gytis-ivaskevicius/flake-utils-plus/bfc53579db89de750b25b0c5e7af299e0c06d7d3?narHash=sha256-YkbRa/1wQWdWkVJ01JvV%2B75KIdM37UErqKgTf0L54Fk%3D' (2023-10-03)
  → 'github:gytis-ivaskevicius/flake-utils-plus/afcb15b845e74ac5e998358709b2b5fe42a948d1?narHash=sha256-4WNeriUToshQ/L5J%2BdTSWC5OJIwT39SEP7V7oylndi8%3D' (2025-02-03)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/2037779e018ebc2d381001a891e2a793fce7a74f?narHash=sha256-5bD6iSPDgVTLly2gy2oJVwzuyuFZOz2p4qt8c8UoYIE%3D' (2024-01-08)
  → 'github:oxalica/rust-overlay/f3cd1e0feb994188fe3ad9a5c3ab021ed433b8c8?narHash=sha256-HUtFcF4NLwvu7CAowWgqCHXVkNj0EOc/W6Ism4biV6I%3D' (2025-03-13)
• Removed input 'rust-overlay/flake-utils'

Signed-off-by: Stefan Ellmauthaler <ellmauthaler@echo-intelligence.at>
2025-03-13 18:30:16 +01:00
Stefan Ellmauthaler
54e57e3820 Update GDPR/legal document
Signed-off-by: Stefan Ellmauthaler <ellmauthaler@echo-intelligence.at>
2025-03-13 18:30:16 +01:00
f601d473cc Fix bug introduced by 530fb5c 2024-01-08 12:56:22 +01:00
530fb5c4e6 Fix clippy warnings 2024-01-08 12:01:10 +01:00
a85b0cb129 flake.lock: Update
Flake lock file updates:

• Updated input 'flake-utils':
    'github:gytis-ivaskevicius/flake-utils-plus/2bf0f91643c2e5ae38c1b26893ac2927ac9bd82a' (2022-07-07)
  → 'github:gytis-ivaskevicius/flake-utils-plus/bfc53579db89de750b25b0c5e7af299e0c06d7d3' (2023-10-03)
• Updated input 'flake-utils/flake-utils':
    'github:numtide/flake-utils/919d646de7be200f3bf08cb76ae1f09402b6f9b4' (2023-07-11)
  → 'github:numtide/flake-utils/ff7b65b44d01cf9ba6a71320833626af21126384' (2023-09-12)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/bd836ac5e5a7358dea73cb74a013ca32864ccb86' (2023-08-01)
  → 'github:NixOS/nixpkgs/70bdadeb94ffc8806c0570eb5c2695ad29f0e421' (2024-01-03)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/99df4908445be37ddb2d332580365fce512a7dcf' (2023-08-03)
  → 'github:oxalica/rust-overlay/2037779e018ebc2d381001a891e2a793fce7a74f' (2024-01-08)
2024-01-08 12:01:10 +01:00
Stefan Ellmauthaler
627a1a1810
Add flake app and packages for adf-bdd (#155)
* Add flake packages for adf-bdd
2023-08-04 16:00:39 +02:00
dependabot[bot]
fc0042fcd1
Bump serde from 1.0.180 to 1.0.181 (#166) 2023-08-04 09:28:10 +00:00
dependabot[bot]
bc403d10cb
Merge pull request #165 from ellmau/dependabot/npm_and_yarn/frontend/semver-6.3.1 2023-08-04 09:20:41 +00:00
dependabot[bot]
34d3af0c0d
Bump semver from 6.3.0 to 6.3.1 in /frontend
Bumps [semver](https://github.com/npm/node-semver) from 6.3.0 to 6.3.1.
- [Release notes](https://github.com/npm/node-semver/releases)
- [Changelog](https://github.com/npm/node-semver/blob/v6.3.1/CHANGELOG.md)
- [Commits](https://github.com/npm/node-semver/compare/v6.3.0...v6.3.1)

---
updated-dependencies:
- dependency-name: semver
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-08-04 09:11:03 +00:00
monsterkrampe
8cafca22eb Fix typo in info text 2023-08-04 11:09:08 +02:00
monsterkrampe
3979f77d03 Upgrade dependencies 2023-08-04 10:59:27 +02:00
eeef4729b6 flake.lock: Update
Flake lock file updates:

• Updated input 'flake-utils':
    'github:numtide/flake-utils/93a2b84fc4b70d9e089d029deacc3583435c2ed6' (2023-03-15)
  → 'github:numtide/flake-utils/a1720a10a6cfe8234c0e93907ffe81be440f4cef' (2023-05-31)
• Added input 'flake-utils/systems':
    'github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e' (2023-04-09)
• Updated input 'nixpkgs-unstable':
    'github:NixOS/nixpkgs/4bb072f0a8b267613c127684e099a70e1f6ff106' (2023-03-27)
  → 'github:NixOS/nixpkgs/4729ffac6fd12e26e5a8de002781ffc49b0e94b7' (2023-06-06)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/c8d8d05b8100d451243b614d950fa3f966c1fcc2' (2023-03-29)
  → 'github:oxalica/rust-overlay/b4b71458b92294e8f1c3a112d972e3cff8a2ab71' (2023-06-08)
2023-06-08 09:44:30 +02:00
b68c0b3d3f NixOs 23.05 2023-06-08 09:44:30 +02:00
dependabot[bot]
6c0da967b9 Bump clap from 4.2.7 to 4.3.0
Bumps [clap](https://github.com/clap-rs/clap) from 4.2.7 to 4.3.0.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.2.7...clap_complete-v4.3.0)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-08 09:44:30 +02:00
dependabot[bot]
716d1194a6 Bump predicates from 2.1.5 to 3.0.3
Bumps [predicates](https://github.com/assert-rs/predicates-rs) from 2.1.5 to 3.0.3.
- [Changelog](https://github.com/assert-rs/predicates-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/assert-rs/predicates-rs/compare/v2.1.5...v3.0.3)

---
updated-dependencies:
- dependency-name: predicates
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-08 09:44:30 +02:00
dependabot[bot]
3445ae343f Bump serde from 1.0.162 to 1.0.163
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.162 to 1.0.163.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.162...v1.0.163)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2023-06-08 09:44:30 +02:00
Lukas Gerlach
26e978ca47
Update Cargo.lock (#148)
* Update Cargo.lock

* Update biodivine-lib-bdd

---------

Co-authored-by: monsterkrampe <monsterkrampe@users.noreply.github.com>
Co-authored-by: Stefan Ellmauthaler <stefan.ellmauthaler@tu-dresden.de>
2023-05-05 15:37:34 +02:00
Lukas Gerlach
dd8524e02e
Update dependencies in yarn.lock (#146) 2023-05-05 11:07:50 +02:00
Stefan Ellmauthaler
c9278cf5ce
milestone/frontend (#63)
* Introduce separate server package

* Implement basic visualization of solve response

* Make fetch endpoint depend on environment

* Introduce features flag for localhost cors support

* Serve static files from './assets' directory

* Add Dockerfile as example for server with frontend

* Support multiple solving strategies

* Support stable model semantics with nogoods

* Introduce custom node type for nicer layout

* Support more options and multiple models

* Use standard example for adfs on the frontend

* Use unoptimised hybrid step for better presentation

* Upgrade frontend dependencies

* Animate graph changes

* Experiment with timeout on API endpoints

* Relax CORS restrictions for local development

* Add API for adding/deleting users; login; logout

* Add API for uploading and solving adf problems

* Add API for getting and updating user

* Return early for parse and solve; Add Adf GET

* Add Delete and Index endpoints for ADFs

* Add basic UI for user endpoints

* Enforce username and password to be set on login

* Show colored snackbars

* Allow file upload for ADF; fix some server bugs

* Implement ADF Add Form and Overview

* Add Detail View for ADF problems

* Add docker-compose file for mongodb (development)

* Add mongodb (DEV) data directory to dockerignore

* Let unknown routes be handled by frontend

* Add legal information page to frontend

* Change G6 Graph layout slightly

* Add missing doc comments to lib

* Update legal information regarding cookies

* Add project logos to frontend

* Add help texts to frontend

* Move DoubleLabeledGraph from lib to server

* Give example for custom Adf datastructure in docs

* Update README and Project Website

* Update devskim.yml

* Add READMEs for frontend and server

---------

Co-authored-by: monsterkrampe <monsterkrampe@users.noreply.github.com>
2023-05-04 17:10:38 +02:00
Stefan Ellmauthaler
73986437f8
Dependabot/bump (#140)
* Bump env_logger from 0.9.1 to 0.10.0

Bumps [env_logger](https://github.com/rust-cli/env_logger) from 0.9.1 to 0.10.0.
- [Release notes](https://github.com/rust-cli/env_logger/releases)
- [Changelog](https://github.com/rust-cli/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rust-cli/env_logger/compare/v0.9.1...v0.10.0)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump biodivine-lib-bdd from 0.4.1 to 0.4.2

Bumps [biodivine-lib-bdd](https://github.com/sybila/biodivine-lib-bdd) from 0.4.1 to 0.4.2.
- [Release notes](https://github.com/sybila/biodivine-lib-bdd/releases)
- [Commits](https://github.com/sybila/biodivine-lib-bdd/compare/v0.4.1...v0.4.2)

---
updated-dependencies:
- dependency-name: biodivine-lib-bdd
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump clap from 4.0.32 to 4.1.4

Bumps [clap](https://github.com/clap-rs/clap) from 4.0.32 to 4.1.4.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.0.32...v4.1.4)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump nom from 7.1.1 to 7.1.3

Bumps [nom](https://github.com/Geal/nom) from 7.1.1 to 7.1.3.
- [Release notes](https://github.com/Geal/nom/releases)
- [Changelog](https://github.com/rust-bakery/nom/blob/7.1.3/CHANGELOG.md)
- [Commits](https://github.com/Geal/nom/compare/7.1.1...7.1.3)

---
updated-dependencies:
- dependency-name: nom
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-02-17 14:03:33 +01:00
Stefan Ellmauthaler
24c6788c9f
Update Readme files to fix breaking badge changes
* Update README.md
* Update bin/README.md
* Update lib/README.md
2023-01-24 10:49:34 +01:00
Stefan Ellmauthaler
11083098a2
Update/depbot flake (#133)
* Bump predicates from 2.1.1 to 2.1.5

Bumps [predicates](https://github.com/assert-rs/predicates-rs) from 2.1.1 to 2.1.5.
- [Release notes](https://github.com/assert-rs/predicates-rs/releases)
- [Changelog](https://github.com/assert-rs/predicates-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/assert-rs/predicates-rs/compare/v2.1.1...v2.1.5)

---
updated-dependencies:
- dependency-name: predicates
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump serde from 1.0.147 to 1.0.152

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.147 to 1.0.152.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.147...v1.0.152)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump serde_json from 1.0.87 to 1.0.91

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.87 to 1.0.91.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.87...v1.0.91)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump clap from 4.0.18 to 4.0.32

Bumps [clap](https://github.com/clap-rs/clap) from 4.0.18 to 4.0.32.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.0.18...v4.0.32)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump assert_fs from 1.0.7 to 1.0.10

Bumps [assert_fs](https://github.com/assert-rs/assert_fs) from 1.0.7 to 1.0.10.
- [Release notes](https://github.com/assert-rs/assert_fs/releases)
- [Changelog](https://github.com/assert-rs/assert_fs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/assert-rs/assert_fs/compare/v1.0.7...v1.0.10)

---
updated-dependencies:
- dependency-name: assert_fs
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update to NixOS 22.11

* flake.lock: Update

Flake lock file updates:

• Updated input 'nixpkgs-unstable':
    'github:NixOS/nixpkgs/fc07622617a373a742ed96d4dd536849d4bc1ec6' (2022-11-13)
  → 'github:NixOS/nixpkgs/677ed08a50931e38382dbef01cba08a8f7eac8f6' (2022-12-29)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/2342f70f7257046effc031333c4cfdea66c91d82' (2022-11-15)
  → 'github:oxalica/rust-overlay/c8bf9c162bb3f734cf357846e995eb70b94e2bcd' (2023-01-02)

* Fix clippy issue in testcase-builder

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-02 10:07:24 +01:00
Stefan Ellmauthaler
68acf25a39
Dependabot/bump (#121)
* Bump biodivine-lib-bdd from 0.4.0 to 0.4.1

Bumps [biodivine-lib-bdd](https://github.com/sybila/biodivine-lib-bdd) from 0.4.0 to 0.4.1.
- [Release notes](https://github.com/sybila/biodivine-lib-bdd/releases)
- [Commits](https://github.com/sybila/biodivine-lib-bdd/compare/v0.4.0...v0.4.1)

---
updated-dependencies:
- dependency-name: biodivine-lib-bdd
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump serde from 1.0.145 to 1.0.147

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.145 to 1.0.147.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.145...v1.0.147)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump assert_cmd from 2.0.4 to 2.0.5

Bumps [assert_cmd](https://github.com/assert-rs/assert_cmd) from 2.0.4 to 2.0.5.
- [Release notes](https://github.com/assert-rs/assert_cmd/releases)
- [Changelog](https://github.com/assert-rs/assert_cmd/blob/master/CHANGELOG.md)
- [Commits](https://github.com/assert-rs/assert_cmd/compare/v2.0.4...v2.0.5)

---
updated-dependencies:
- dependency-name: assert_cmd
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump clap from 4.0.9 to 4.0.18

Bumps [clap](https://github.com/clap-rs/clap) from 4.0.9 to 4.0.18.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v4.0.9...v4.0.18)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump serde_json from 1.0.85 to 1.0.87

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.85 to 1.0.87.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.85...v1.0.87)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* flake.lock: Update

Flake lock file updates:

• Updated input 'flake-utils':
    'github:numtide/flake-utils/c0e246b9b83f637f4681389ecabcb2681b4f3af0' (2022-08-07)
  → 'github:numtide/flake-utils/5aed5285a952e0b949eb3ba02c12fa4fcfef535f' (2022-11-02)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/81a3237b64e67b66901c735654017e75f0c50943' (2022-10-03)
  → 'github:NixOS/nixpkgs/16f4e04658c2ab10114545af2f39db17d51bd1bd' (2022-11-14)
• Updated input 'nixpkgs-unstable':
    'github:NixOS/nixpkgs/fd54651f5ffb4a36e8463e0c327a78442b26cbe7' (2022-10-03)
  → 'github:NixOS/nixpkgs/fc07622617a373a742ed96d4dd536849d4bc1ec6' (2022-11-13)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/148815c92641976b798efb2805a50991de4bac7f' (2022-10-04)
  → 'github:oxalica/rust-overlay/2342f70f7257046effc031333c4cfdea66c91d82' (2022-11-15)

* Fix new clippy comments due to rust 1.65

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-11-15 09:41:52 +01:00
dependabot[bot]
68cbab5b02
Bump clap from 4.0.7 to 4.0.9 (#114) 2022-10-04 14:53:59 +00:00
Stefan Ellmauthaler
6d687a3839
Dependabot/bump (#113)
* Bump clap from 3.2.20 to 4.0.7

Bumps [clap](https://github.com/clap-rs/clap) from 3.2.20 to 4.0.7.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v3.2.20...v4.0.7)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump serde from 1.0.144 to 1.0.145

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.144 to 1.0.145.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.144...v1.0.145)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump env_logger from 0.9.0 to 0.9.1

Bumps [env_logger](https://github.com/env-logger-rs/env_logger) from 0.9.0 to 0.9.1.
- [Release notes](https://github.com/env-logger-rs/env_logger/releases)
- [Changelog](https://github.com/env-logger-rs/env_logger/blob/main/CHANGELOG.md)
- [Commits](https://github.com/env-logger-rs/env_logger/compare/v0.9.0...v0.9.1)

---
updated-dependencies:
- dependency-name: env_logger
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump roaring from 0.9.0 to 0.10.1

Bumps [roaring](https://github.com/RoaringBitmap/roaring-rs) from 0.9.0 to 0.10.1.
- [Release notes](https://github.com/RoaringBitmap/roaring-rs/releases)
- [Commits](https://github.com/RoaringBitmap/roaring-rs/compare/v0.9.0...v0.10.1)

---
updated-dependencies:
- dependency-name: roaring
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* flake.lock: Update

Flake lock file updates:

• Updated input 'flake-utils':
    'github:numtide/flake-utils/7e2a3b3dfd9af950a856d66b0a7d01e3c18aa249' (2022-07-04)
  → 'github:numtide/flake-utils/c0e246b9b83f637f4681389ecabcb2681b4f3af0' (2022-08-07)
• Updated input 'gitignoresrc':
    'github:hercules-ci/gitignore.nix/f2ea0f8ff1bce948ccb6b893d15d5ea3efaf1364' (2022-07-21)
  → 'github:hercules-ci/gitignore.nix/a20de23b925fd8264fd7fad6454652e142fd7f73' (2022-08-14)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/e43cf1748462c81202a32b26294e9f8eefcc3462' (2022-08-01)
  → 'github:NixOS/nixpkgs/81a3237b64e67b66901c735654017e75f0c50943' (2022-10-03)
• Updated input 'nixpkgs-unstable':
    'github:NixOS/nixpkgs/7b9be38c7250b22d829ab6effdee90d5e40c6e5c' (2022-07-30)
  → 'github:NixOS/nixpkgs/fd54651f5ffb4a36e8463e0c327a78442b26cbe7' (2022-10-03)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/9055cb4f33f062c0dd33aa7e3c89140da8f70057' (2022-08-02)
  → 'github:oxalica/rust-overlay/148815c92641976b798efb2805a50991de4bac7f' (2022-10-04)

* Update code to use clap version 4 and use then_some(..)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-10-04 16:47:46 +02:00
dependabot[bot]
d807089884
Bump clap from 3.2.19 to 3.2.20 (#108) 2022-09-06 10:29:53 +00:00
Stefan Ellmauthaler
683c12a525
Dependabot/2022sep06 merge (#107)
* Bump serde from 1.0.141 to 1.0.144

Bumps [serde](https://github.com/serde-rs/serde) from 1.0.141 to 1.0.144.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.141...v1.0.144)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump clap from 3.2.16 to 3.2.19

Bumps [clap](https://github.com/clap-rs/clap) from 3.2.16 to 3.2.19.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/v3.2.19/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v3.2.16...v3.2.19)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Bump serde_json from 1.0.82 to 1.0.85

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.82 to 1.0.85.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.82...v1.0.85)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-09-06 12:16:33 +02:00
Stefan Ellmauthaler
17a57938c7
Fix/restrict slow (#103)
* Fix missing cache usage in restrict algorithm
2022-08-23 15:17:51 +02:00
Stefan Ellmauthaler
bc21ecabc7
Add a channel to inform other threads about bdd updates (#95)
* Add a channel to inform other threads about bdd updates
* Redesign sender and receiver model for BDDs
* Add options for both directions and allow utilising none, one, or both at the same time.
2022-08-17 12:43:35 +02:00
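The commit above only names the design. As a rough illustration, a minimal sketch of such an optional, bidirectional update link using crossbeam-channel might look as follows; the names `BddUpdate` and `UpdateLink` are hypothetical and not the crate's actual API:

```rust
use crossbeam_channel::{unbounded, Receiver, Sender};

/// Hypothetical update message, for illustration only.
#[derive(Debug, Clone)]
enum BddUpdate {
    NodeAdded(usize),
    ModelCount(usize, u64),
}

/// Holds none, one, or both directions, mirroring the commit description.
struct UpdateLink {
    outgoing: Option<Sender<BddUpdate>>,
    incoming: Option<Receiver<BddUpdate>>,
}

impl UpdateLink {
    fn publish(&self, update: BddUpdate) {
        if let Some(tx) = &self.outgoing {
            // Ignore send errors: the peer may already have hung up.
            let _ = tx.send(update);
        }
    }

    fn drain(&self) -> Vec<BddUpdate> {
        self.incoming
            .as_ref()
            .map(|rx| rx.try_iter().collect::<Vec<_>>())
            .unwrap_or_default()
    }
}

fn main() {
    // Loopback wiring for the demo: the link publishes to itself.
    let (tx, rx) = unbounded();
    let link = UpdateLink { outgoing: Some(tx), incoming: Some(rx) };
    link.publish(BddUpdate::NodeAdded(42));
    println!("{:?}", link.drain());
}
```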
Stefan Ellmauthaler
495f9c893f
Add a random heuristic, based on rand-crate (#96)
* Add a random heuristic, based on rand-crate
* Add cryptographically strong seed option
2022-08-16 23:13:27 +02:00
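As a sketch of what a rand-based heuristic with the two seeding modes described above could look like (the function is a hypothetical stand-in, not the solver's API):

```rust
use rand::prelude::*;

/// Pick the next variable to branch on, uniformly at random (illustrative).
fn random_heuristic(candidates: &[usize], rng: &mut StdRng) -> Option<usize> {
    candidates.choose(rng).copied()
}

fn main() {
    // Reproducible runs: fixed seed.
    let mut seeded = StdRng::seed_from_u64(0xC0FFEE);
    // "Cryptographically strong seed": seed the RNG from OS entropy instead.
    let mut strong = StdRng::from_entropy();

    let vars = [0, 1, 2, 3];
    println!("{:?}", random_heuristic(&vars, &mut seeded));
    println!("{:?}", random_heuristic(&vars, &mut strong));
}
```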
Stefan Ellmauthaler
ebbf8de684
Merge pull request #101 from ellmau/feature/issue_99_BDD-cache
Add cache to ite and restrict
2022-08-12 14:36:35 +02:00
7000c41a61
Add cache to ite and restrict 2022-08-12 13:55:48 +02:00
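The caching idea from issue #99 is plain memoization keyed on the operands. A minimal sketch for the if-then-else (ITE) operation, with a made-up node representation that is not the library's real data structure:

```rust
use std::collections::HashMap;

type Node = usize; // index into a node table (hypothetical)

struct Bdd {
    ite_cache: HashMap<(Node, Node, Node), Node>,
}

impl Bdd {
    fn ite(&mut self, i: Node, t: Node, e: Node) -> Node {
        if let Some(&hit) = self.ite_cache.get(&(i, t, e)) {
            return hit; // cache hit: skip the recomputation entirely
        }
        let result = self.compute_ite(i, t, e);
        self.ite_cache.insert((i, t, e), result);
        result
    }

    fn compute_ite(&mut self, i: Node, t: Node, e: Node) -> Node {
        // Placeholder for the real Shannon-expansion recursion.
        i ^ t ^ e
    }
}

fn main() {
    let mut bdd = Bdd { ite_cache: HashMap::new() };
    let first = bdd.ite(1, 2, 3); // computed
    let second = bdd.ite(1, 2, 3); // served from the cache
    assert_eq!(first, second);
}
```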
dependabot[bot]
8269fbac9d
Bump clap from 3.2.12 to 3.2.16 (#83) 2022-08-02 18:08:38 +00:00
dependabot[bot]
dccf11479f
Bump test-log from 0.2.10 to 0.2.11 (#84) 2022-08-02 18:03:41 +00:00
dependabot[bot]
fdf530f6e6
Bump serde from 1.0.137 to 1.0.141 (#87) 2022-08-02 17:59:23 +00:00
Stefan Ellmauthaler
0216a10895
Merge pull request #94 from ellmau/triage/1.62.1_upgrade
Restore rust 1.62.1 conformity
2022-08-02 19:58:27 +02:00
a0139c66e2
Restore rust 1.62.1 conformity 2022-08-02 19:55:03 +02:00
Stefan Ellmauthaler
d8492dc6d0
Merge pull request #92 from ellmau/triage/1.61_downgrade
Rust 1.61 conformity
2022-08-02 19:47:06 +02:00
c4a57bcb84
Rust 1.61 conformity 2022-08-02 19:43:15 +02:00
Stefan Ellmauthaler
fea31f1590
Update build.yml (#91) 2022-08-02 15:55:30 +02:00
Stefan Ellmauthaler
1596dc6818
Prepare files to publish the adf-bdd-bin package (#89)
* Prepare files to publish the adf-bdd-bin package
2022-08-02 15:09:48 +02:00
Stefan Ellmauthaler
d7e71e5da7
milestone/nogoods (#74)
* Add NoGood and NoGoodStore (#65)
* Update Flake to nix 22.05
* Add nogood-algorithm to the ADF
* Add public api for the nogood-learner
* Add direnv to gitignore
* Avoid a Box, support custom heuristics functions
* Add ng option with heu to binary
* Introduce a new flag to handle big instances (modelcount vs adhoccount)
Note that adhoccount without modelcount will not produce correct modelcounts if memoization is used
* Add new heuristic
* Add crossbeam-channel to represent an output-stream of stable models
Uses a crossbeam-channel to fill a queue, which can be used safely
from outside the function.
This rework is done to also allow ad-hoc output of results in a
potentially multi-threaded setup; a rough sketch of this streaming idea follows after this entry.
* Added documentation on this new feature on the module-page
* Fix broken links in rust-doc
* Update Readme for lib to reflect the new NoGood API
* Add metadata to bin/Cargo.toml, add features
* added a benchmark feature, to easily compile benchmark-releases
* Fix facet count tests
* Add multithread-safe functionality for the dictionary/ordering
* Streamline a couple of API calls
* Expose more structs and methods to the public API
* Breaking some API (though nothing which is currently used in the binary)
* Simple version of gh pages
* Added more links and information to the landing page
* Fix badges in the app-doc
* Add two valued interpretation
Parameterised the stable-nogood algorithm to allow a variable
stability check function.
* Refactor nogood-algorithm name
* Update README.md and documentation (`docu` folder)
`README.md` on the `/` level is now presenting the same information
which is provided in `docs/index.md`
* Update main
- Update main functionality
- Update naming
* Fix cli-test
* Update Version to 0.3.0
Due to breaking API changes and reaching a milestone, the version is
incremented to 0.3.0 (beta)
* Update Documentation navigation (#81)
* flake.lock: Update
Flake lock file updates:
• Updated input 'flake-utils':
    'github:numtide/flake-utils/1ed9fb1935d260de5fe1c2f7ee0ebaae17ed2fa1' (2022-05-30)
  → 'github:numtide/flake-utils/7e2a3b3dfd9af950a856d66b0a7d01e3c18aa249' (2022-07-04)
• Updated input 'gitignoresrc':
    'github:hercules-ci/gitignore.nix/bff2832ec341cf30acb3a4d3e2e7f1f7b590116a' (2022-03-05)
  → 'github:hercules-ci/gitignore.nix/f2ea0f8ff1bce948ccb6b893d15d5ea3efaf1364' (2022-07-21)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/8b538fcb329a7bc3d153962f17c509ee49166973' (2022-06-15)
  → 'github:NixOS/nixpkgs/e43cf1748462c81202a32b26294e9f8eefcc3462' (2022-08-01)
• Updated input 'nixpkgs-unstable':
    'github:NixOS/nixpkgs/b1957596ff1c7aa8c55c4512b7ad1c9672502e8e' (2022-06-15)
  → 'github:NixOS/nixpkgs/7b9be38c7250b22d829ab6effdee90d5e40c6e5c' (2022-07-30)
• Updated input 'rust-overlay':
    'github:oxalica/rust-overlay/9eea93067eff400846c36f57b7499df9ef428ba0' (2022-06-17)
  → 'github:oxalica/rust-overlay/9055cb4f33f062c0dd33aa7e3c89140da8f70057' (2022-08-02)
* Add type alias for NoGood
Add a type alias `Interpretation` for NoGood to reflect the duality
where an Interpretation might become a NoGood.
* Add documentation information about later revisions on VarContainer

Co-authored-by: Maximilian Marx <mmarx@wh2.tu-dresden.de>
2022-08-02 14:02:00 +02:00
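A rough sketch of the streaming idea referenced in the entry above: a worker thread fills a crossbeam channel while the caller consumes stable models as they arrive. `Model` and `enumerate_stable` are hypothetical stand-ins, not the published API:

```rust
use crossbeam_channel::{unbounded, Sender};
use std::thread;

type Model = Vec<bool>;

fn enumerate_stable(sender: Sender<Model>) {
    // Pretend solver loop: send each model as soon as it is found.
    for m in [vec![true, false], vec![false, true]] {
        if sender.send(m).is_err() {
            break; // receiver dropped: stop early
        }
    }
    // Dropping the sender closes the stream.
}

fn main() {
    let (tx, rx) = unbounded();
    let worker = thread::spawn(move || enumerate_stable(tx));
    // Ad-hoc output: print models while the worker may still be running.
    for model in rx {
        println!("stable model: {model:?}");
    }
    worker.join().expect("worker panicked");
}
```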
Stefan Ellmauthaler
45700d7224
Create combine-prs.yml (#86)
* Add a combine-prs action (see [here](https://github.com/hrvey/combine-prs-workflow) for further details) to handle dependabot updates with fewer merge checks
2022-08-01 23:35:47 +02:00
Stefan Ellmauthaler
d4e31ee8c2
Update citation.cff
Add a note to the message about papers to appear at conferences and add the first mention of the COMMA system demo
2022-07-18 12:25:30 +02:00
Stefan Ellmauthaler
62fa2e8c94
Add citation.cff file
Add a first variant of the citation file - it will be extended as soon as the "to appear" papers are published.
2022-07-18 11:41:20 +02:00
dependabot[bot]
493a834388
Bump clap from 3.1.18 to 3.2.2 -> 3.2.12 (#78)
* Bump clap from 3.1.18 to 3.2.2

Bumps [clap](https://github.com/clap-rs/clap) from 3.1.18 to 3.2.2.
- [Release notes](https://github.com/clap-rs/clap/releases)
- [Changelog](https://github.com/clap-rs/clap/blob/master/CHANGELOG.md)
- [Commits](https://github.com/clap-rs/clap/compare/v3.1.18...clap_complete-v3.2.2)

---
updated-dependencies:
- dependency-name: clap
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Update version to 3.2.12

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Stefan Ellmauthaler <stefan.ellmauthaler@tu-dresden.de>
2022-07-18 10:58:45 +02:00
dependabot[bot]
b7902ef940
Merge pull request #77 from ellmau/dependabot/cargo/serde_json-1.0.82 2022-07-18 08:28:16 +00:00
dependabot[bot]
bdbebc137d
Bump serde_json from 1.0.81 to 1.0.82
Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.81 to 1.0.82.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.81...v1.0.82)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-07-01 13:09:56 +00:00
dependabot[bot]
c29db9a6aa
Bump clap from 3.1.14 to 3.1.18 (#68) 2022-06-13 12:11:45 +00:00
Stefan Ellmauthaler
78949b18e5
Bump flake and update to rust 1.61 (#67) 2022-05-24 11:13:54 +02:00
Stefan Ellmauthaler
e0eea73525
Use nix-envrc, Bump flake (#66)
Use nix-envrc instead of lorri for the virtual environment package management
2022-05-18 08:53:22 +02:00
Stefan Ellmauthaler
98753d696b
Add CI for develop branch (#62) 2022-05-04 16:12:50 +02:00
Stefan Ellmauthaler
6d09e71ba6
Add badges to readme (#61)
* Add badges to readme

* Format and reorder badges
2022-05-04 12:01:27 +02:00
dependabot[bot]
c951e78c86
Bump serde_json from 1.0.79 to 1.0.80 (#60) 2022-05-02 08:44:38 +00:00
dependabot[bot]
6e5df674a5
Bump serde from 1.0.136 to 1.0.137 (#58) 2022-05-02 08:40:09 +00:00
dependabot[bot]
1c8508eb66
Bump clap from 3.1.8 to 3.1.14 (#59) 2022-05-02 08:35:10 +00:00
Stefan Ellmauthaler
b8518ee340
Merge pull request #57 from ellmau/documentation/acks
Fix link location in all readme files for KBS
2022-04-28 09:57:35 +02:00
9f4689ad45
Fix link location in all readme files for KBS 2022-04-28 09:51:26 +02:00
Stefan Ellmauthaler
43da337ce9
Add acknowledgements (#55)
Update Acknowledgements and Legal Disclaimer
2022-04-27 11:56:24 +02:00
Stefan Ellmauthaler
35bb36bfc5
Update crates.io usage for the binary (#53)
* Change the binary name to be written in kebab-case to distinguish it from the snake-case library name
* Use the crates.io crate for the binary now
2022-04-22 13:34:08 +02:00
75 changed files with 14321 additions and 775 deletions

5
.dockerignore Normal file

@ -0,0 +1,5 @@
frontend/node_modules
frontend/.parcel-cache
server/mongodb-data
target/debug

2
.envrc

@ -1 +1 @@
eval "$(lorri direnv)"
use flake

139
.github/workflows/combine-prs.yml vendored Normal file

@ -0,0 +1,139 @@
name: 'Combine PRs'

# Controls when the action will run - in this case triggered manually
on:
  workflow_dispatch:
    inputs:
      branchPrefix:
        description: 'Branch prefix to find combinable PRs based on'
        required: true
        default: 'dependabot'
      mustBeGreen:
        description: 'Only combine PRs that are green (status is success)'
        required: true
        default: true
      combineBranchName:
        description: 'Name of the branch to combine PRs into'
        required: true
        default: 'combine-prs-branch'
      ignoreLabel:
        description: 'Exclude PRs with this label'
        required: true
        default: 'nocombine'

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "combine-prs"
  combine-prs:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      - uses: actions/github-script@v3
        id: fetch-branch-names
        name: Fetch branch names
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          script: |
            const pulls = await github.paginate('GET /repos/:owner/:repo/pulls', {
              owner: context.repo.owner,
              repo: context.repo.repo
            });
            branches = [];
            prs = [];
            base_branch = null;
            for (const pull of pulls) {
              const branch = pull['head']['ref'];
              console.log('Pull for branch: ' + branch);
              if (branch.startsWith('${{ github.event.inputs.branchPrefix }}')) {
                console.log('Branch matched: ' + branch);
                statusOK = true;
                if(${{ github.event.inputs.mustBeGreen }}) {
                  console.log('Checking green status: ' + branch);
                  const statuses = await github.paginate('GET /repos/{owner}/{repo}/commits/{ref}/status', {
                    owner: context.repo.owner,
                    repo: context.repo.repo,
                    ref: branch
                  });
                  if(statuses.length > 0) {
                    const latest_status = statuses[0]['state'];
                    console.log('Validating status: ' + latest_status);
                    if(latest_status != 'success') {
                      console.log('Discarding ' + branch + ' with status ' + latest_status);
                      statusOK = false;
                    }
                  }
                }
                console.log('Checking labels: ' + branch);
                const labels = pull['labels'];
                for(const label of labels) {
                  const labelName = label['name'];
                  console.log('Checking label: ' + labelName);
                  if(labelName == '${{ github.event.inputs.ignoreLabel }}') {
                    console.log('Discarding ' + branch + ' with label ' + labelName);
                    statusOK = false;
                  }
                }
                if (statusOK) {
                  console.log('Adding branch to array: ' + branch);
                  branches.push(branch);
                  prs.push('#' + pull['number'] + ' ' + pull['title']);
                  base_branch = pull['base']['ref'];
                }
              }
            }
            if (branches.length == 0) {
              core.setFailed('No PRs/branches matched criteria');
              return;
            }
            core.setOutput('base-branch', base_branch);
            core.setOutput('prs-string', prs.join('\n'));
            combined = branches.join(' ')
            console.log('Combined: ' + combined);
            return combined
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2.3.3
        with:
          fetch-depth: 0
      # Creates a branch with other PR branches merged together
      - name: Created combined branch
        env:
          BASE_BRANCH: ${{ steps.fetch-branch-names.outputs.base-branch }}
          BRANCHES_TO_COMBINE: ${{ steps.fetch-branch-names.outputs.result }}
          COMBINE_BRANCH_NAME: ${{ github.event.inputs.combineBranchName }}
        run: |
          echo "$BRANCHES_TO_COMBINE"
          sourcebranches="${BRANCHES_TO_COMBINE%\"}"
          sourcebranches="${sourcebranches#\"}"
          basebranch="${BASE_BRANCH%\"}"
          basebranch="${basebranch#\"}"
          git config pull.rebase false
          git config user.name github-actions
          git config user.email github-actions@github.com
          git branch $COMBINE_BRANCH_NAME $basebranch
          git checkout $COMBINE_BRANCH_NAME
          git pull origin $sourcebranches --no-edit
          git push origin $COMBINE_BRANCH_NAME
      # Creates a PR with the new combined branch
      - uses: actions/github-script@v3
        name: Create Combined Pull Request
        env:
          PRS_STRING: ${{ steps.fetch-branch-names.outputs.prs-string }}
        with:
          github-token: ${{secrets.GITHUB_TOKEN}}
          script: |
            const prString = process.env.PRS_STRING;
            const body = 'This PR was created by the Combine PRs action by combining the following PRs:\n' + prString;
            await github.pulls.create({
              owner: context.repo.owner,
              repo: context.repo.repo,
              title: 'Combined PR',
              head: '${{ github.event.inputs.combineBranchName }}',
              base: '${{ steps.fetch-branch-names.outputs.base-branch }}',
              body: body
            });


@ -7,28 +7,28 @@ name: DevSkim
on:
  push:
    branches: [ main ]
    branches: [ "main" ]
  pull_request:
    branches: [ main ]
    branches: [ "main" ]
  schedule:
    - cron: '26 6 * * 5'
    - cron: '15 6 * * 4'

jobs:
  lint:
    name: DevSkim
    runs-on: ubuntu-20.04
    runs-on: ubuntu-latest
    permissions:
      actions: read
      contents: read
      security-events: write

    steps:
      - name: Checkout code
        uses: actions/checkout@v2
        uses: actions/checkout@v4

      - name: Run DevSkim scanner
        uses: microsoft/DevSkim-Action@v1

      - name: Upload DevSkim scan results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v1
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: devskim-results.sarif


@ -4,6 +4,7 @@ on:
  pull_request:
    branches:
      - main
      - develop

env:
  RUST_BACKTRACE: 1
@ -28,6 +29,7 @@ jobs:
      - run: cargo test --verbose --workspace
      - run: cargo test --verbose --workspace --all-features
      - run: cargo test --verbose --workspace --no-default-features
      - run: cargo test --verbose --workspace --no-default-features -F benchmark

  clippy:
    name: Lint with clippy
@ -42,6 +44,7 @@ jobs:
      - run: cargo clippy --workspace --all-targets --verbose
      - run: cargo clippy --workspace --all-targets --verbose --no-default-features
      - run: cargo clippy --workspace --all-targets --verbose --all-features
      - run: cargo clippy --workspace --all-targets --verbose --no-default-features -F benchmark

  rustfmt:
    name: Verify code formatting

6
.gitignore vendored

@ -21,3 +21,9 @@ tramp
*_flymake*
/tests/out/
# Ignore direnv data
/.direnv/
# ignore perfdata
perf.data

62
CITATION.cff Normal file

@ -0,0 +1,62 @@
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  Abstract Dialectical Frameworks solved by Binary
  Decision Diagrams; developed in Dresden (ADF-BDD)
message: >-
  If you use this software, please cite it using the
  metadata from this file. Note that related conference papers are accepted and will appear soon.
type: software
authors:
  - given-names: Stefan
    family-names: Ellmauthaler
    email: stefan.ellmauthaler@tu-dresden.de
    affiliation: 'KBS, TU Dresden'
    orcid: 'https://orcid.org/0000-0003-3882-4286'
repository-code: 'https://github.com/ellmau/adf-obdd'
url: 'https://ellmau.github.io/adf-obdd/'
abstract: >-
  Solver for ADFs grounded, complete, and stable
  semantics by utilising OBDDs - ordered binary
  decision diagrams.
keywords:
  - binary decision diagrams
  - argumentation frameworks
  - argumentation tools
license: MIT
commit: 35bb36bfc5ee47b2ad864ead48907fdca5fc5ec4
version: v0.2.4-beta.1
date-released: '2022-04-22'
preferred-citation:
  authors:
    - given-names: Stefan
      family-names: Ellmauthaler
      email: stefan.ellmauthaler@tu-dresden.de
      affiliation: 'KBS, TU Dresden'
      orcid: 'https://orcid.org/0000-0003-3882-4286'
    - given-names: Sarah Allice
      family-names: Gaggl
      email: sarah.gaggl@tu-dresden.de
      affiliation: 'TU Dresden'
      orcid: 'https://orcid.org/0000-0003-2425-6089'
    - given-names: Dominik
      family-names: Rusovac
      email: dominik.rusovac@tu-dresden.de
      affiliation: 'TU Dresden'
      orcid: 'https://orcid.org/0000-0002-3172-5827'
    - given-names: Johannes Peter
      family-names: Wallner
      email: wallner@ist.tugraz.at
      affiliation: 'TU Graz'
      orcid: 'https://orcid.org/0000-0002-3051-1966'
  title: "ADF-BDD: An ADF Solver Based on Binary Decision Diagrams"
  type: conference
  conference:
    name: 9th International Conference on Computational Models of Argument
    location: Cardiff
    alias: COMMA
    website: 'https://comma22.cs.cf.ac.uk/'
  year: 2022

3507
Cargo.lock generated

File diff suppressed because it is too large


@ -1,5 +1,5 @@
[workspace]
members=[ "lib", "bin" ]
members=[ "lib", "bin", "server" ]
default-members = [ "lib" ]
[profile.release]

36
Dockerfile Normal file

@ -0,0 +1,36 @@
# 1. BUILD-CONTAINER: Frontend
FROM node:hydrogen-alpine
WORKDIR /root
COPY ./frontend /root
RUN yarn && yarn build
# 2. BUILD-CONTAINER: Server
FROM rust:alpine
WORKDIR /root
RUN apk add --no-cache musl-dev
COPY ./bin /root/bin
COPY ./lib /root/lib
COPY ./server /root/server
COPY ./Cargo.toml /root/Cargo.toml
COPY ./Cargo.lock /root/Cargo.lock
RUN cargo build --workspace --release
# 3. RUNTIME-CONTAINER: run server with frontend as assets
FROM alpine:latest
WORKDIR /root
COPY --from=0 /root/dist /root/assets
COPY --from=1 /root/target/release/adf-bdd-server /root/server
EXPOSE 8080
ENTRYPOINT ["./server"]
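A possible way to build and run the container described above (the image tag is arbitrary; port 8080 comes from the EXPOSE line, and the web service may additionally expect a running MongoDB instance, cf. the docker-compose file mentioned in the commit log):

```bash
$> docker build -t adf-bdd-server .
$> docker run --rm -p 8080:8080 adf-bdd-server
```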

129
README.md

@ -1,6 +1,26 @@
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ellmau/adf-obdd/Code%20coverage%20with%20tarpaulin) [![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd) ![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases) ![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd) [![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases) [![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
[![Crates.io](https://img.shields.io/crates/v/adf-bdd-bin?label=crates.io%20%28bin%29)](https://crates.io/crates/adf-bdd-bin)
[![Crates.io](https://img.shields.io/crates/v/adf_bdd?label=crates.io%20%28lib%29)](https://crates.io/crates/adf_bdd)
[![docs.rs](https://img.shields.io/docsrs/adf_bdd?label=docs.rs)](https://docs.rs/adf_bdd/latest/adf_bdd/)
![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/ellmau/adf-obdd/codecov.yml?branch=main)
[![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd)
![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases)
![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd)
[![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases)
![Crates.io](https://img.shields.io/crates/l/adf_bdd)
[![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
# Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden (ADF-BDD)
# Abstract Dialectical Frameworks solved by (ordered) Binary Decision Diagrams; developed in Dresden (ADF-oBDD project)
This project is currently split into three parts:
- a [binary (adf-bdd)](bin), which allows one to easily answer semantics questions on abstract dialectical frameworks
- a [library (adf_bdd)](lib), which contains all the necessary algorithms and an open API that computes the answers to the semantics questions
- a [server](server) and a [frontend](frontend) to access the solver as a web-service available at https://adf-bdd.dev
Latest documentation of the API can be found [here](https://docs.rs/adf_bdd/latest/adf_bdd/).
The current version of the binary can be downloaded [here](https://github.com/ellmau/adf-obdd/releases).
Do not hesitate to report bugs or ask about features in the [issues-section](https://github.com/ellmau/adf-obdd/issues), or start a conversation about anything related to the project in the [discussion space](https://github.com/ellmau/adf-obdd/discussions).
## Abstract Dialectical Frameworks
@ -8,59 +28,6 @@ An abstract dialectical framework (ADF) consists of abstract statements. Each st
## Ordered Binary Decision Diagram
An ordered binary decision diagram is a normalised representation of binary functions, where satisfiability- and validity checks can be done relatively cheaply.
## Usage of the binary
```
USAGE:
adf_bdd [OPTIONS] <INPUT>
ARGS:
<INPUT> Input filename
OPTIONS:
--an Sorts variables in an alphanumeric manner
--com Compute the complete models
--counter <COUNTER> Set if the (counter-)models shall be computed and printed,
possible values are 'nai' and 'mem' for naive and memoization
respectively (only works in hybrid and naive mode)
--export <EXPORT> Export the adf-bdd state after parsing and BDD instantiation to
the given filename
--grd Compute the grounded model
-h, --help Print help information
--import Import an adf-bdd state instead of an adf
--lib <IMPLEMENTATION> choose the bdd implementation of either 'biodivine', 'naive', or
hybrid [default: hybrid]
--lx Sorts variables in a lexicographic manner
-q Sets log verbosity to only errors
--rust_log <RUST_LOG> Sets the verbosity to 'warn', 'info', 'debug' or 'trace' if -v and
-q are not used [env: RUST_LOG=debug]
--stm Compute the stable models
--stmca Compute the stable models with the help of modelcounting using
heuristics a
--stmcb Compute the stable models with the help of modelcounting using
heuristics b
--stmpre Compute the stable models with a pre-filter (only hybrid lib-mode)
--stmrew Compute the stable models with a single-formula rewriting (only
hybrid lib-mode)
--stmrew2 Compute the stable models with a single-formula rewriting on
internal representation (only hybrid lib-mode)
-v Sets log verbosity (multiple times means more verbose)
-V, --version Print version information
```
Note that import and export only work if the naive library is chosen.
Right now there is no additional information attached to the computed models, so if you use `--com --grd --stm` as the command-line arguments, the boundaries between the results are not explicitly marked.
They can be easily identified though:
- The computation is always in the same order
- grd
- com
- stm
- We know that there is always exactly one grounded model
- We know that there always exists at least one complete model (i.e. the grounded one)
- We know that there does not need to exist a stable model
- We know that every stable model is a complete model too
## Input-file format:
Each statement is defined by an ASP-style unary predicate s, where the enclosed term represents the label of the statement.
The binary predicate ac relates each statement to one propositional formula in prefix notation, with the logical operations and constants as follows:
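The operator table itself is unchanged context and therefore elided from this diff. For orientation, a small instance in this format could look as follows (an illustrative sketch; consult the operator table for the exact spellings):

```plain
s(a).
s(b).
ac(a,c(v)).
ac(b,neg(a)).
```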
@ -74,52 +41,22 @@ The binary predicate ac relates each statement to one propositional formula in p
# Features
`adhoccounting` will cache the modelcount on-the-fly during the construction of the BDD
- `adhoccounting` will cache the modelcount on-the-fly during the construction of the BDD
- `adhoccountmodels` allows in addition to compute the models ad-hoc too. Note that the memoization approach for modelcounting does not work correctly if `adhoccounting` is set and `adhoccountmodels` is not.
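As an illustration, enabling both caching features together when building the library could look like this (a sketch using the feature names listed above):

```bash
$> cargo build -p adf_bdd --features "adhoccounting,adhoccountmodels"
```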
# Development notes
Additional information for contribution, testing, and development in general can be found here.
## Contributing to the project
You want to help and contribute to the project? That is great. Please see the [contributing guidelines](https://github.com/ellmau/adf-obdd/blob/main/.github/CONTRIBUTING.md) first.
## Building the binary:
To build the binary, you need to run
```bash
$> cargo build --workspace --release
```
# Acknowledgements
This work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in projects number 389792660 (TRR 248, [Center for Perspicuous Systems](https://www.perspicuous-computing.science/)),
the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the
[Center for Scalable Data Analytics and Artificial Intelligence](https://www.scads.de) (ScaDS.AI),
and by the [Center for Advancing Electronics Dresden](https://cfaed.tu-dresden.de) (cfaed).
To build the binary with debug-symbols, run
```bash
$> cargo build --workspace
```
# Affiliation
This work has been partly developed by the [Knowledge-Based Systems Group](http://kbs.inf.tu-dresden.de/), [Faculty of Computer Science](https://tu-dresden.de/ing/informatik) of [TU Dresden](https://tu-dresden.de).
## Testing with the `res` folder:
To run all the tests placed in the submodule you need to run
```bash
$> git submodule init
```
the first time.
Afterwards you need to update the content of the submodule to be on the currently used revision by
```bash
$> git submodule update
```
The tests can be started by using the test-framework of cargo, i.e.
```bash
$> cargo test
```
Note that some of the instances are quite big and it might take some time to finish all the tests.
If you do not initialise the submodule, tests will "only" run on the other unit-tests and (possibly forthcoming) other integration tests.
Due to the way the test modules are generated, you need to call
```bash
$> cargo clean
```
if you change some of your test-cases.
To remove the tests just type
```bash
$> git submodule deinit res/adf-instances
```
or
```bash
$> git submodule deinit --all
```
# Disclaimer
Hosting content here does not establish any formal or legal relation to TU Dresden.


@ -1,34 +1,41 @@
[package]
name = "adf_bdd-solver"
version = "0.2.4"
name = "adf-bdd-bin"
version = "0.3.0-dev"
authors = ["Stefan Ellmauthaler <stefan.ellmauthaler@tu-dresden.de>"]
edition = "2021"
homepage = "https://ellmau.github.io/adf-obdd"
repository = "https://github.com/ellmau/adf-obdd"
license = "MIT"
exclude = ["res/", "./flake*", "*.nix", ".envrc", "_config.yml"]
exclude = ["res/", "./flake*", "*.nix", ".envrc", "_config.yml", "tarpaulin-report.*", "*~"]
description = "Solver for ADFs grounded, complete, and stable semantics by utilising OBDDs - ordered binary decision diagrams"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[[bin]]
name = "adf_bdd"
name = "adf-bdd"
path = "src/main.rs"
[dependencies]
adf_bdd = { path = "../lib", default-features = false }
clap = {version = "3.1.8", features = [ "derive", "cargo", "env" ]}
adf_bdd = { version="0.3.1", path="../lib", default-features = false }
clap = {version = "4.3.0", features = [ "derive", "cargo", "env" ]}
log = { version = "0.4", features = [ "max_level_trace", "release_max_level_info" ] }
serde = { version = "1.0", features = ["derive","rc"] }
serde_json = "1.0"
env_logger = "0.9"
env_logger = "0.10"
strum = { version = "0.24" }
crossbeam-channel = "0.5"
[dev-dependencies]
assert_cmd = "2.0"
predicates = "2.1"
predicates = "3.0"
assert_fs = "1.0"
[features]
default = ["adhoccounting", "variablelist", "adf_bdd/default" ]
default = ["adhoccounting", "variablelist", "adf_bdd/default", "frontend"]
adhoccounting = ["adf_bdd/adhoccounting"] # count models ad-hoc - disable if counting is not needed
importexport = ["adf_bdd/importexport"]
variablelist = [ "HashSet", "adf_bdd/variablelist" ]
HashSet = ["adf_bdd/HashSet"]
adhoccountmodels = ["adf_bdd/adhoccountmodels"]
benchmark = ["adf_bdd/benchmark"]
frontend = ["adf_bdd/frontend"]


@ -1,4 +1,12 @@
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ellmau/adf-obdd/Code%20coverage%20with%20tarpaulin) [![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd) ![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases) ![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd) [![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases) [![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
[![Crates.io](https://img.shields.io/crates/v/adf-bdd-bin)](https://crates.io/crates/adf-bdd-bin)
![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/ellmau/adf-obdd/codecov.yml?branch=main)
[![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd)
![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases)
![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from)
![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd)
[![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases)
[![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions)
![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
# Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden (ADF-BDD)
This is the readme for the executable solver.
@ -11,7 +19,7 @@ An ordered binary decision diagram is a normalised representation of binary func
## Usage
```
USAGE:
adf_bdd [OPTIONS] <INPUT>
adf-bdd [OPTIONS] <INPUT>
ARGS:
<INPUT> Input filename
@ -26,8 +34,11 @@ OPTIONS:
the given filename
--grd Compute the grounded model
-h, --help Print help information
--heu <HEU> Choose which heuristics shall be used by the nogood-learning
approach [possible values: Simple, MinModMinPathsMaxVarImp,
MinModMaxVarImpMinPaths]
--import Import an adf-bdd state instead of an adf
--lib <IMPLEMENTATION> choose the bdd implementation of either 'biodivine', 'naive', or
--lib <IMPLEMENTATION> Choose the bdd implementation of either 'biodivine', 'naive', or
hybrid [default: hybrid]
--lx Sorts variables in a lexicographic manner
-q Sets log verbosity to only errors
@ -38,11 +49,14 @@ OPTIONS:
heuristics a
--stmcb Compute the stable models with the help of modelcounting using
heuristics b
--stmng Compute the stable models with the nogood-learning based approach
--stmpre Compute the stable models with a pre-filter (only hybrid lib-mode)
--stmrew Compute the stable models with a single-formula rewriting (only
hybrid lib-mode)
--stmrew2 Compute the stable models with a single-formula rewriting on
internal representation (only hybrid lib-mode)
--twoval Compute the two valued models with the nogood-learning based
approach
-v Sets log verbosity (multiple times means more verbose)
-V, --version Print version information
```
@ -113,3 +127,15 @@ or
```bash
$> git submodule deinit --all
```
# Acknowledgements
This work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in projects number 389792660 (TRR 248, [Center for Perspicuous Systems](https://www.perspicuous-computing.science/)),
the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the
[Center for Scalable Data Analytics and Artificial Intelligence](https://www.scads.de) (ScaDS.AI),
and by the [Center for Advancing Electronics Dresden](https://cfaed.tu-dresden.de) (cfaed).
# Affiliation
This work has been partly developed by the [Knowledge-Based Systems Group](http://kbs.inf.tu-dresden.de/), [Faculty of Computer Science](https://tu-dresden.de/ing/informatik) of [TU Dresden](https://tu-dresden.de).
# Disclaimer
Hosting content here does not establish any formal or legal relation to TU Dresden.


@ -17,7 +17,7 @@ In addition some further features, like counter-model counting is not supported
# Usage
```plain
USAGE:
adf_bdd [OPTIONS] <INPUT>
adf-bdd [OPTIONS] <INPUT>
ARGS:
<INPUT> Input filename
@ -32,20 +32,29 @@ OPTIONS:
the given filename
--grd Compute the grounded model
-h, --help Print help information
--heu <HEU> Choose which heuristics shall be used by the nogood-learning
approach [possible values: Simple, MinModMinPathsMaxVarImp,
MinModMaxVarImpMinPaths]
--import Import an adf-bdd state instead of an adf
--lib <IMPLEMENTATION> choose the bdd implementation of either 'biodivine', 'naive', or
--lib <IMPLEMENTATION> Choose the bdd implementation of either 'biodivine', 'naive', or
hybrid [default: hybrid]
--lx Sorts variables in a lexicographic manner
-q Sets log verbosity to only errors
--rust_log <RUST_LOG> Sets the verbosity to 'warn', 'info', 'debug' or 'trace' if -v and
-q are not used [env: RUST_LOG=debug]
--stm Compute the stable models
--stmc Compute the stable models with the help of modelcounting
--stmca Compute the stable models with the help of modelcounting using
heuristics a
--stmcb Compute the stable models with the help of modelcounting using
heuristics b
--stmng Compute the stable models with the nogood-learning based approach
--stmpre Compute the stable models with a pre-filter (only hybrid lib-mode)
--stmrew Compute the stable models with a single-formula rewriting (only
hybrid lib-mode)
--stmrew2 Compute the stable models with a single-formula rewriting on
internal representation (only hybrid lib-mode)
--twoval Compute the two-valued models with the nogood-learning based
approach
-v Sets log verbosity (multiple times means more verbose)
-V, --version Print version information
```
@ -54,7 +63,6 @@ OPTIONS:
#![deny(
missing_debug_implementations,
missing_copy_implementations,
missing_copy_implementations,
trivial_casts,
trivial_numeric_casts,
unsafe_code
@ -74,63 +82,74 @@ use adf_bdd::adfbiodivine::Adf as BdAdf;
use adf_bdd::parser::AdfParser;
use clap::Parser;
use crossbeam_channel::unbounded;
use strum::VariantNames;
#[derive(Parser, Debug)]
#[clap(author, version, about)]
#[command(author, version, about)]
struct App {
/// Input filename
#[clap(parse(from_os_str))]
#[arg(value_parser)]
input: PathBuf,
/// Sets the verbosity to 'warn', 'info', 'debug' or 'trace' if -v and -q are not used
#[clap(long = "rust_log", env)]
#[arg(long = "rust_log", env)]
rust_log: Option<String>,
/// choose the bdd implementation of either 'biodivine', 'naive', or hybrid
#[clap(long = "lib", default_value = "hybrid")]
/// Choose the bdd implementation of either 'biodivine', 'naive', or hybrid
#[arg(long = "lib", default_value = "hybrid")]
implementation: String,
/// Sets log verbosity (multiple times means more verbose)
#[clap(short, parse(from_occurrences), group = "verbosity")]
#[arg(short, action = clap::builder::ArgAction::Count, group = "verbosity")]
verbose: u8,
/// Sets log verbosity to only errors
#[clap(short, group = "verbosity")]
#[arg(short, group = "verbosity")]
quiet: bool,
/// Sorts variables in a lexicographic manner
#[clap(long = "lx", group = "sorting")]
#[arg(long = "lx", group = "sorting")]
sort_lex: bool,
/// Sorts variables in an alphanumeric manner
#[clap(long = "an", group = "sorting")]
#[arg(long = "an", group = "sorting")]
sort_alphan: bool,
/// Compute the grounded model
#[clap(long = "grd")]
#[arg(long = "grd")]
grounded: bool,
/// Compute the stable models
#[clap(long = "stm")]
#[arg(long = "stm")]
stable: bool,
/// Compute the stable models with the help of modelcounting using heuristics a
#[clap(long = "stmca")]
#[arg(long = "stmca")]
stable_counting_a: bool,
/// Compute the stable models with the help of modelcounting using heuristics b
#[clap(long = "stmcb")]
#[arg(long = "stmcb")]
stable_counting_b: bool,
/// Compute the stable models with a pre-filter (only hybrid lib-mode)
#[clap(long = "stmpre")]
#[arg(long = "stmpre")]
stable_pre: bool,
/// Compute the stable models with a single-formula rewriting (only hybrid lib-mode)
#[clap(long = "stmrew")]
#[arg(long = "stmrew")]
stable_rew: bool,
/// Compute the stable models with a single-formula rewriting on internal representation (only hybrid lib-mode)
#[clap(long = "stmrew2")]
#[arg(long = "stmrew2")]
stable_rew2: bool,
/// Compute the stable models with the nogood-learning based approach
#[arg(long = "stmng")]
stable_ng: bool,
/// Choose which heuristics shall be used by the nogood-learning approach
#[arg(long, value_parser = clap::builder::PossibleValuesParser::new(adf_bdd::adf::heuristics::Heuristic::VARIANTS.iter().filter(|&v| v != &"Custom").collect::<Vec<_>>()))]
heu: Option<adf_bdd::adf::heuristics::Heuristic<'static>>,
/// Compute the two-valued models with the nogood-learning based approach
#[arg(long = "twoval")]
two_val: bool,
/// Compute the complete models
#[clap(long = "com")]
#[arg(long = "com")]
complete: bool,
/// Import an adf-bdd state instead of an adf
#[clap(long)]
#[arg(long)]
import: bool,
/// Export the adf-bdd state after parsing and BDD instantiation to the given filename
#[clap(long)]
#[arg(long)]
export: Option<PathBuf>,
/// Set if the (counter-)models shall be computed and printed, possible values are 'nai' and 'mem' for naive and memoization respectively (only works in hybrid and naive mode)
#[clap(long)]
#[arg(long)]
counter: Option<String>,
}
@ -161,7 +180,7 @@ impl App {
let input = std::fs::read_to_string(self.input.clone()).expect("Error Reading File");
match self.implementation.as_str() {
"hybrid" => {
let parser = adf_bdd::parser::AdfParser::default();
let parser = AdfParser::default();
match parser.parse()(&input) {
Ok(_) => log::info!("[Done] parsing"),
Err(e) => {
@ -185,14 +204,14 @@ impl App {
Some("nai") => {
let naive_adf = adf.hybrid_step_opt(false);
for ac_counts in naive_adf.formulacounts(false) {
print!("{:?} ", ac_counts);
print!("{ac_counts:?} ");
}
println!();
}
Some("mem") => {
let naive_adf = adf.hybrid_step_opt(false);
for ac_counts in naive_adf.formulacounts(true) {
print!("{:?}", ac_counts);
print!("{ac_counts:?}");
}
println!();
}
@ -215,6 +234,14 @@ impl App {
}
}
if self.two_val {
let (sender, receiver) = unbounded();
naive_adf.two_val_nogood_channel(self.heu.unwrap_or_default(), sender);
for model in receiver.into_iter() {
print!("{}", printer.print_interpretation(&model));
}
}
if self.stable {
for model in naive_adf.stable() {
print!("{}", printer.print_interpretation(&model));
@ -244,12 +271,18 @@ impl App {
print!("{}", printer.print_interpretation(&model));
}
}
if self.stable_ng {
for model in naive_adf.stable_nogood(self.heu.unwrap_or_default()) {
print!("{}", printer.print_interpretation(&model));
}
}
}
"biodivine" => {
if self.counter.is_some() {
log::error!("Modelcounting not supported in biodivine mode");
}
let parser = adf_bdd::parser::AdfParser::default();
let parser = AdfParser::default();
match parser.parse()(&input) {
Ok(_) => log::info!("[Done] parsing"),
Err(e) => {
@ -335,7 +368,7 @@ impl App {
export.to_string_lossy()
);
} else {
let export_file = match File::create(&export) {
let export_file = match File::create(export) {
Err(reason) => {
panic!("couldn't create {}: {}", export.to_string_lossy(), reason)
}
@ -350,13 +383,13 @@ impl App {
match self.counter.as_deref() {
Some("nai") => {
for ac_counts in adf.formulacounts(false) {
print!("{:?} ", ac_counts);
print!("{ac_counts:?} ");
}
println!();
}
Some("mem") => {
for ac_counts in adf.formulacounts(true) {
print!("{:?}", ac_counts);
print!("{ac_counts:?}");
}
println!();
}
@ -380,6 +413,13 @@ impl App {
print!("{}", printer.print_interpretation(&model));
}
}
if self.stable_ng {
let printer = adf.print_dictionary();
for model in adf.stable_nogood(self.heu.unwrap_or_default()) {
print!("{}", printer.print_interpretation(&model));
}
}
}
}
}


@ -5,29 +5,29 @@ use std::process::Command; // Run programs
#[test]
fn arguments() -> Result<(), Box<dyn std::error::Error>> {
let mut cmd = Command::cargo_bin("adf_bdd")?;
let mut cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg("-vvv").arg("--lx").arg("file.txt");
cmd.assert()
.failure()
.stderr(predicate::str::contains("No such file or directory"));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg("-v").arg("--lx").arg("--an").arg("file.txt");
cmd.assert().failure().stderr(predicate::str::contains(
"The argument '--lx' cannot be used with '--an'",
"argument '--lx' cannot be used with '--an'",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg("-h");
cmd.assert().success().stdout(predicate::str::contains(
"stefan.ellmauthaler@tu-dresden.de",
));
cmd.assert()
.success()
.stdout(predicate::str::contains("adf-bdd [OPTIONS] <INPUT>"));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg("--version");
cmd.assert()
.success()
.stdout(predicate::str::contains("adf_bdd-solver "));
.stdout(predicate::str::contains("adf-bdd-bin "));
Ok(())
}
@ -38,14 +38,14 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
let wrong_file = assert_fs::NamedTempFile::new("wrong_format.adf")?;
wrong_file.write_str("s(7).s(4).s(8).s(3).s(5).s(9).s(10).s(1).s(6).s(2).ac(7,or(or(and(7,neg(1)),neg(9)),3)).ac(4,5).ac(8,or(or(8,1),neg(7))).ac(3,or(and(or(6,7),neg(and(6,7))),neg(2))).ac(5,c(f)).ac(9,and(neg(7),2)).ac(10,or(neg(2),6)).ac(1,and(or(or(neg(2),neg(1)),8),7)).ac(6,and(and(neg(2),10),and(or(7,4),neg(and(7,4))))).ac(2,and(and(and(neg(10),3),neg(6)),or(9,1)))).")?;
let mut cmd = Command::cargo_bin("adf_bdd")?;
let mut cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(wrong_file.path());
cmd.assert()
.failure()
.stderr(predicate::str::contains("code: Eof"));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("-vv")
.arg("--grd")
@ -55,7 +55,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(7) F(4) u(8) u(3) F(5) u(9) u(10) u(1) u(6) u(2)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("-q")
.arg("--grd")
@ -65,7 +65,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(7) F(4) u(8) u(3) F(5) u(9) u(10) u(1) u(6) u(2)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--lx")
.arg("-v")
@ -76,7 +76,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(10) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -87,7 +87,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.env_clear();
cmd.arg(file.path())
.arg("--an")
@ -98,7 +98,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -110,7 +110,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -124,7 +124,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
let tempdir = assert_fs::TempDir::new()?;
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -136,7 +136,7 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -148,7 +148,9 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
#[cfg(feature = "importexport")]
{
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(tempdir.path().with_file_name("test.json"))
.arg("--an")
.arg("--grd")
@ -158,8 +160,9 @@ fn runs_naive() -> Result<(), Box<dyn std::error::Error>> {
cmd.assert().success().stdout(predicate::str::contains(
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
}
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--com")
@ -180,45 +183,45 @@ fn runs_biodivine() -> Result<(), Box<dyn std::error::Error>> {
let wrong_file = assert_fs::NamedTempFile::new("wrong_format.adf")?;
wrong_file.write_str("s(7).s(4).s(8).s(3).s(5).s(9).s(10).s(1).s(6).s(2).ac(7,or(or(and(7,neg(1)),neg(9)),3)).ac(4,5).ac(8,or(or(8,1),neg(7))).ac(3,or(and(or(6,7),neg(and(6,7))),neg(2))).ac(5,c(f)).ac(9,and(neg(7),2)).ac(10,or(neg(2),6)).ac(1,and(or(or(neg(2),neg(1)),8),7)).ac(6,and(and(neg(2),10),and(or(7,4),neg(and(7,4))))).ac(2,and(and(and(neg(10),3),neg(6)),or(9,1)))).")?;
let mut cmd = Command::cargo_bin("adf_bdd")?;
let mut cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(wrong_file.path());
cmd.assert()
.failure()
.stderr(predicate::str::contains("code: Eof"));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path()).arg("-vv").arg("--grd");
cmd.assert().success().stdout(predicate::str::contains(
"u(7) F(4) u(8) u(3) F(5) u(9) u(10) u(1) u(6) u(2)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path()).arg("-q").arg("--grd");
cmd.assert().success().stdout(predicate::str::contains(
"u(7) F(4) u(8) u(3) F(5) u(9) u(10) u(1) u(6) u(2)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path()).arg("--lx").arg("-v").arg("--grd");
cmd.assert().success().stdout(predicate::str::contains(
"u(1) u(10) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path()).arg("--an").arg("--grd").arg("--stm");
cmd.assert().success().stdout(predicate::str::contains(
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.env_clear();
cmd.arg(file.path()).arg("--an").arg("--grd");
cmd.assert().success().stdout(predicate::str::contains(
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -228,7 +231,7 @@ fn runs_biodivine() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -237,7 +240,7 @@ fn runs_biodivine() -> Result<(), Box<dyn std::error::Error>> {
cmd.assert().success().stdout(predicate::str::contains(
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--com")
@ -256,14 +259,14 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
let wrong_file = assert_fs::NamedTempFile::new("wrong_format.adf")?;
wrong_file.write_str("s(7).s(4).s(8).s(3).s(5).s(9).s(10).s(1).s(6).s(2).ac(7,or(or(and(7,neg(1)),neg(9)),3)).ac(4,5).ac(8,or(or(8,1),neg(7))).ac(3,or(and(or(6,7),neg(and(6,7))),neg(2))).ac(5,c(f)).ac(9,and(neg(7),2)).ac(10,or(neg(2),6)).ac(1,and(or(or(neg(2),neg(1)),8),7)).ac(6,and(and(neg(2),10),and(or(7,4),neg(and(7,4))))).ac(2,and(and(and(neg(10),3),neg(6)),or(9,1)))).")?;
let mut cmd = Command::cargo_bin("adf_bdd")?;
let mut cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(wrong_file.path());
cmd.assert()
.failure()
.stderr(predicate::str::contains("code: Eof"));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("-vv")
.arg("--grd")
@ -273,7 +276,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
"u(7) F(4) u(8) u(3) F(5) u(9) u(10) u(1) u(6) u(2)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("-q")
.arg("--grd")
@ -283,7 +286,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
"u(7) F(4) u(8) u(3) F(5) u(9) u(10) u(1) u(6) u(2)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--lx")
.arg("-v")
@ -294,7 +297,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(10) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9)",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -305,7 +308,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.env_clear();
cmd.arg(file.path())
.arg("--an")
@ -316,7 +319,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -328,7 +331,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--grd")
@ -339,7 +342,7 @@ fn runs_biodivine_hybrid() -> Result<(), Box<dyn std::error::Error>> {
cmd.assert().success().stdout(predicate::str::contains(
"u(1) u(2) u(3) F(4) F(5) u(6) u(7) u(8) u(9) u(10) \n",
));
cmd = Command::cargo_bin("adf_bdd")?;
cmd = Command::cargo_bin("adf-bdd")?;
cmd.arg(file.path())
.arg("--an")
.arg("--com")

docs/_config.yml Normal file

@ -0,0 +1,3 @@
theme: jekyll-theme-architect
show_downloads: false
markdown: kramdown

docs/adf-bdd.md Normal file

@ -0,0 +1,139 @@
[![Crates.io](https://img.shields.io/crates/v/adf_bdd)](https://crates.io/crates/adf_bdd)
[![docs.rs](https://img.shields.io/docsrs/adf_bdd?label=docs.rs)](https://docs.rs/adf_bdd/latest/adf_bdd/)
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ellmau/adf-obdd/Code%20coverage%20with%20tarpaulin)
[![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd)
![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases)
![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd)
[![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases)
![Crates.io](https://img.shields.io/crates/l/adf_bdd)
[![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
| [Home](index.md) | [Binary](adf-bdd.md) | [Library](adf_bdd.md)| [Web-Service](https://adf-bdd.dev) | [Repository](https://github.com/ellmau/adf-obdd) |
|--- | --- | --- | --- | --- |
# Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden (ADF-BDD)
This is the readme for the executable solver.
## Usage
```
USAGE:
adf-bdd [OPTIONS] <INPUT>
ARGS:
<INPUT> Input filename
OPTIONS:
--an Sorts variables in an alphanumeric manner
--com Compute the complete models
--counter <COUNTER> Set if the (counter-)models shall be computed and printed,
possible values are 'nai' and 'mem' for naive and memoization
respectively (only works in hybrid and naive mode)
--export <EXPORT> Export the adf-bdd state after parsing and BDD instantiation to
the given filename
--grd Compute the grounded model
-h, --help Print help information
--heu <HEU> Choose which heuristics shall be used by the nogood-learning
approach [possible values: Simple, MinModMinPathsMaxVarImp,
MinModMaxVarImpMinPaths]
--import Import an adf-bdd state instead of an adf
--lib <IMPLEMENTATION> Choose the bdd implementation of either 'biodivine', 'naive', or
hybrid [default: hybrid]
--lx Sorts variables in a lexicographic manner
-q Sets log verbosity to only errors
--rust_log <RUST_LOG> Sets the verbosity to 'warn', 'info', 'debug' or 'trace' if -v and
-q are not used [env: RUST_LOG=debug]
--stm Compute the stable models
--stmca Compute the stable models with the help of modelcounting using
heuristics a
--stmcb Compute the stable models with the help of modelcounting using
heuristics b
--stmng Compute the stable models with the nogood-learning based approach
--stmpre Compute the stable models with a pre-filter (only hybrid lib-mode)
--stmrew Compute the stable models with a single-formula rewriting (only
hybrid lib-mode)
--stmrew2 Compute the stable models with a single-formula rewriting on
internal representation (only hybrid lib-mode)
--twoval Compute the two-valued models with the nogood-learning based
approach
-v Sets log verbosity (multiple times means more verbose)
-V, --version Print version information
```
Note that import and export only work if the naive library is chosen.
Right now there is no additional information attached to the computed models, so if you use --com --grd --stm, the boundaries between the result sets are not explicitly marked.
They can be easily identified though (see the example after this list):
- The computation is always in the same order
- grd
- com
- stm
- We know that there is always exactly one grounded model
- We know that there always exists at least one complete model (i.e. the grounded one)
- We know that there does not need to exist a stable model
- We know that every stable model is a complete model too
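For example, a combined invocation (the filename is illustrative) prints the grounded model first, then all complete models, then all stable models:
```bash
$> adf-bdd instance.adf --grd --com --stm
```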
## Input-file format:
Each statement is defined by an ASP-style unary predicate s, where the enclosed term represents the label of the statement.
The binary predicate ac relates each statement to one propositional formula in prefix notation, with the logical operations and constants as follows (an example file follows the list):
- and(x,y): conjunction
- or(x,y): disjunction
- iff(x,y): if and only if
- xor(x,y): exclusive or
- neg(x): classical negation
- c(v): constant symbol "verum" - tautology/top
- c(f): constant symbol "falsum" - inconsistency/bot
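An example input file in this format (the same running example as in the library documentation):
```plain
s(a).
s(b).
s(c).
s(d).
ac(a,c(v)).
ac(b,or(a,b)).
ac(c,neg(b)).
ac(d,d).
```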
# Development notes
To build the binary, you need to run
```bash
$> cargo build --workspace --release
```
To build the binary with debug-symbols, run
```bash
$> cargo build --workspace
```
To run all the tests placed in the submodule you need to run
```bash
$> git submodule init
```
at the first time.
Afterwards you need to update the content of the submodule to be on the currently used revision by
```bash
$> git submodule update
```
The tests can be started by using the test-framework of cargo, i.e.
```bash
$> cargo test
```
Note that some of the instances are quite big and it might take some time to finish all the tests.
If you do not initialise the submodule, tests will "only" run on the other unit-tests and (possibly forthcoming) other integration tests.
Due to the way the test modules are generated, you need to call
```bash
$> cargo clean
```
if you change some of your test-cases.
To remove the tests just type
```bash
$> git submodule deinit res/adf-instances
```
or
```bash
$> git submodule deinit --all
```
# Acknowledgements
This work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in projects number 389792660 (TRR 248, [Center for Perspicuous Systems](https://www.perspicuous-computing.science/)),
the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the
[Center for Scalable Data Analytics and Artificial Intelligence](https://www.scads.de) (ScaDS.AI),
and by the [Center for Advancing Electronics Dresden](https://cfaed.tu-dresden.de) (cfaed).
# Affiliation
This work has been partly developed by the [Knowledge-Based Systems Group](http://kbs.inf.tu-dresden.de/), [Faculty of Computer Science](https://tu-dresden.de/ing/informatik) of [TU Dresden](https://tu-dresden.de).
# Disclaimer
Hosting content here does not establish any formal or legal relation to TU Dresden.

docs/adf_bdd.md Normal file

@ -0,0 +1,167 @@
[![Crates.io](https://img.shields.io/crates/v/adf_bdd)](https://crates.io/crates/adf_bdd)
[![docs.rs](https://img.shields.io/docsrs/adf_bdd?label=docs.rs)](https://docs.rs/adf_bdd/latest/adf_bdd/)
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ellmau/adf-obdd/Code%20coverage%20with%20tarpaulin)
[![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd)
![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases)
![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd)
[![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases)
![Crates.io](https://img.shields.io/crates/l/adf_bdd)
[![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
| [Home](index.md) | [Binary](adf-bdd.md) | [Library](adf_bdd.md)| [Web-Service](https://adf-bdd.dev) | [Repository](https://github.com/ellmau/adf-obdd) |
|--- | --- | --- | --- | --- |
# Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden (ADF_BDD)
This library contains an efficient representation of Abstract Dialectical Frameworks (ADf) by utilising an implementation of Ordered Binary Decision Diagrams (OBDD)
## Noteworthy relations between ADF semantics
The models of the different semantics can be easily identified in the output (see the sketch after this list):
* The computation is always in the same order
* grd
* com
* stm
* We know that there is always exactly one grounded model
* We know that there always exists at least one complete model (i.e. the grounded one)
* We know that there does not need to exist a stable model
* We know that every stable model is a complete model too
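As a minimal sketch of these relations in code (using the same API as the usage examples further below; the input is the running example from this document), the stable models printed here are always a subset of the complete ones:
```rust
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;

let input = "s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,or(a,b)).ac(c,neg(b)).ac(d,d).";
let parser = AdfParser::default();
parser.parse()(&input).expect("parsing should succeed");
// create the Adf with the naive/in-crate implementation
let mut adf = Adf::from_parser(&parser);
let printer = adf.print_dictionary();
// the grounded model is one of the complete models
for model in adf.complete() {
    print!("complete: {}", printer.print_interpretation(&model));
}
// every stable model is a complete model too
for model in adf.stable() {
    print!("stable:   {}", printer.print_interpretation(&model));
}
```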
## Ordered Binary Decision Diagram
An ordered binary decision diagram is a normalised representation of binary functions, where satisfiability and validity checks can be done relatively cheaply.
Note that one advantage of this implementation is that only one oBDD is used for all acceptance conditions. This can be done because all of them have the same signature (i.e. the set of all statements + top and bottom concepts). Due to this uniform representation, reductions on subformulae which are shared by two or more statements only need to be computed once and are already cached in the data structure for further applications.
The algorithm used to create a BDD from a given formula does not perform well on bigger formulae; therefore it is possible to use a state-of-the-art library to instantiate the BDD (https://github.com/sybila/biodivine-lib-bdd). It is possible to either stay with the biodivine library or switch back to the variant implemented by adf-bdd. The variant implemented in this library offers reuse of already computed reductions and memoisation techniques, which are not offered by biodivine. In addition, some further features, like counter-model counting, are not supported by biodivine; a sketch of this counting follows below.
Note that import and export only work if the naive library is chosen.
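The following is a small sketch of the counter-model counting mentioned above, mirroring the formulacounts calls used by the solver binary (the two-statement input is illustrative, and whether ad-hoc counts are available may depend on the enabled crate features):
```rust
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;

let input = "s(a).s(b).ac(a,c(v)).ac(b,neg(a)).";
let parser = AdfParser::default();
parser.parse()(&input).expect("parsing should succeed");
let adf = Adf::from_parser(&parser);
// one (model, counter-model) count pair per acceptance condition;
// `true` selects the memoisation-based counting variant
for counts in adf.formulacounts(true) {
    print!("{counts:?} ");
}
println!();
```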
## Input-file format:
Each statement is defined by an ASP-style unary predicate s, where the enclosed term represents the label of the statement. The binary predicate ac relates each statement to one propositional formula in prefix notation, with the logical operations and constants as follows:
```plain
and(x,y): conjunction
or(x,y): disjunction
iff(x,y): if and only if
xor(x,y): exclusive or
neg(x): classical negation
c(v): constant symbol “verum” - tautology/top
c(f): constant symbol “falsum” - inconsistency/bot
```
### Example input file:
```plain
s(a).
s(b).
s(c).
s(d).
ac(a,c(v)).
ac(b,or(a,b)).
ac(c,neg(b)).
ac(d,d).
```
## Usage examples
First parse a given ADF and sort the statements, if needed.
```rust
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;
// use the above example as input
let input = "s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,or(a,b)).ac(c,neg(b)).ac(d,d).";
let parser = AdfParser::default();
match parser.parse()(&input) {
Ok(_) => log::info!("[Done] parsing"),
Err(e) => {
log::error!(
"Error during parsing:\n{} \n\n cannot continue, panic!",
e
);
panic!("Parsing failed, see log for further details")
}
}
// sort lexicographic
parser.varsort_lexi();
```
use the naive/in-crate implementation
```rust
// create Adf
let mut adf = Adf::from_parser(&parser);
// compute and print the complete models
let printer = adf.print_dictionary();
for model in adf.complete() {
print!("{}", printer.print_interpretation(&model));
}
```
use the biodivine implementation
```rust
// create Adf
let adf = adf_bdd::adfbiodivine::Adf::from_parser(&parser);
// compute and print the complete models
let printer = adf.print_dictionary();
for model in adf.complete() {
print!("{}", printer.print_interpretation(&model));
}
```
use the hybrid approach implementation
```rust
// create biodivine Adf
let badf = adf_bdd::adfbiodivine::Adf::from_parser(&parser);
// instantiate the internally used adf after the reduction done by biodivine
let mut adf = badf.hybrid_step();
// compute and print the complete models
let printer = adf.print_dictionary();
for model in adf.complete() {
print!("{}", printer.print_interpretation(&model));
}
```
use the new `NoGood`-based algorithm and utilise the new interface with channels:
```rust
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;
use adf_bdd::adf::heuristics::Heuristic;
use adf_bdd::datatypes::{Term, adf::VarContainer};
// create a channel
let (s, r) = crossbeam_channel::unbounded();
let variables = VarContainer::default();
let variables_worker = variables.clone();
// spawn a solver thread
let solving = std::thread::spawn(move || {
// use the above example as input
let input = "s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,or(a,b)).ac(c,neg(b)).ac(d,d).";
let parser = AdfParser::with_var_container(variables_worker);
parser.parse()(&input).expect("parsing worked well");
// use hybrid approach
let mut adf = adf_bdd::adfbiodivine::Adf::from_parser(&parser).hybrid_step();
// compute stable with the simple heuristic
adf.stable_nogood_channel(Heuristic::Simple, s);
});
let printer = variables.print_dictionary();
// print results as they are computed
while let Ok(result) = r.recv() {
print!("stable model: {:?} \n", result);
// use dictionary
print!("stable model with variable names: {}", printer.print_interpretation(&result));
}
// waiting for the other thread to close
solving.join().unwrap();
```
# Acknowledgements
This work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in projects number 389792660 (TRR 248, [Center for Perspicuous Systems](https://www.perspicuous-computing.science/)),
the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the
[Center for Scalable Data Analytics and Artificial Intelligence](https://www.scads.de) (ScaDS.AI),
and by the [Center for Advancing Electronics Dresden](https://cfaed.tu-dresden.de) (cfaed).
# Affiliation
This work has been partly developed by the [Knowledge-Based Systems Group](http://kbs.inf.tu-dresden.de/), [Faculty of Computer Science](https://tu-dresden.de/ing/informatik) of [TU Dresden](https://tu-dresden.de).
# Disclaimer
Hosting content here does not establish any formal or legal relation to TU Dresden.

docs/index.md Normal file

@ -0,0 +1,64 @@
[![Crates.io](https://img.shields.io/crates/v/adf_bdd)](https://crates.io/crates/adf_bdd)
[![docs.rs](https://img.shields.io/docsrs/adf_bdd?label=docs.rs)](https://docs.rs/adf_bdd/latest/adf_bdd/)
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ellmau/adf-obdd/Code%20coverage%20with%20tarpaulin)
[![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd)
![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases)
![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd)
[![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases)
![Crates.io](https://img.shields.io/crates/l/adf_bdd)
[![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
| [Home](index.md) | [Binary](adf-bdd.md) | [Library](adf_bdd.md)| [Web-Service](https://adf-bdd.dev) | [Repository](https://github.com/ellmau/adf-obdd) |
|--- | --- | --- | --- | --- |
# Abstract Dialectical Frameworks solved by (ordered) Binary Decision Diagrams; developed in Dresden (ADF-oBDD project)
This project is currently split into three parts:
- a [binary (adf-bdd)](adf-bdd.md), which allows one to easily answer semantics questions on abstract dialectical frameworks
- a [library (adf_bdd)](adf_bdd.md), which contains all the necessary algorithms and an open API which compute the answers to the semantics questions
- a server and a frontend, available at https://adf-bdd.dev
Latest documentation of the API can be found [here](https://docs.rs/adf_bdd/latest/adf_bdd/).
The current version of the binary can be downloaded [here](https://github.com/ellmau/adf-obdd/releases).
Do not hesitate to report bugs or ask about features in the [issues-section](https://github.com/ellmau/adf-obdd/issues), or have a conversation about anything related to the project in the [discussion space](https://github.com/ellmau/adf-obdd/discussions).
## Abstract Dialectical Frameworks
An abstract dialectical framework (ADF) consists of abstract statements. Each statement has a unique label and might be related to other statements (s) in the ADF. This relation is defined by a so-called acceptance condition (ac), which intuitively is a propositional formula where the variable symbols are the labels of the statements. An interpretation is a three-valued function which maps each statement to a truth value (true, false, undecided). We call such an interpretation a model if each acceptance condition agrees with the interpretation; for example, a statement whose acceptance condition is the constant verum must be mapped to true in every model.
## Ordered Binary Decision Diagram
An ordered binary decision diagram is a normalised representation of binary functions, where satisfiability and validity checks can be done relatively cheaply.
## Input-file format:
Each statement is defined by an ASP-style unary predicate s, where the enclosed term represents the label of the statement.
The binary predicate ac relates each statement to one propositional formula in prefix notation, with the logical operations and constants as follows:
- and(x,y): conjunction
- or(x,y): disjunction
- iff(x,y): if and only if
- xor(x,y): exclusive or
- neg(x): classical negation
- c(v): constant symbol "verum" - tautology/top
- c(f): constant symbol "falsum" - inconsistency/bot
# Features
- `adhoccounting` will cache the modelcount on-the-fly during the construction of the BDD
- `adhoccountmodels` allows in addition to compute the models ad-hoc too. Note that the memoization approach for modelcounting does not work correctly if `adhoccounting` is set and `adhoccountmodels` is not.
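As a sketch, assuming the library crate is built directly from this workspace (the package name `adf_bdd` matches the repository layout), these features can be enabled via cargo:
```bash
$> cargo build -p adf_bdd --features adhoccounting,adhoccountmodels
```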
# Development notes
Additional information for contribution, testing, and development in general can be found here.
## Contributing to the project
You want to help and contribute to the project? That is great. Please see the [contributing guidelines](https://github.com/ellmau/adf-obdd/blob/main/.github/CONTRIBUTING.md) first.
# Acknowledgements
This work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in projects number 389792660 (TRR 248, [Center for Perspicuous Systems](https://www.perspicuous-computing.science/)),
the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the
[Center for Scalable Data Analytics and Artificial Intelligence](https://www.scads.de) (ScaDS.AI),
and by the [Center for Advancing Electronics Dresden](https://cfaed.tu-dresden.de) (cfaed).
# Affiliation
This work has been partly developed by the [Knowledge-Based Systems Group](http://kbs.inf.tu-dresden.de/), [Faculty of Computer Science](https://tu-dresden.de/ing/informatik) of [TU Dresden](https://tu-dresden.de).
# Disclaimer
Hosting content here does not establish any formal or legal relation to TU Dresden.

flake.lock generated

@ -1,43 +1,33 @@
{
"nodes": {
"flake-compat": {
"flake": false,
"locked": {
"lastModified": 1648199409,
"narHash": "sha256-JwPKdC2PoVBkG6E+eWw3j6BMR6sL3COpYWfif7RVb8Y=",
"owner": "edolstra",
"repo": "flake-compat",
"rev": "64a525ee38886ab9028e6f61790de0832aa3ef03",
"type": "github"
},
"original": {
"owner": "edolstra",
"repo": "flake-compat",
"type": "github"
}
},
"flake-utils": {
"inputs": {
"flake-utils": "flake-utils_2"
},
"locked": {
"lastModified": 1649676176,
"narHash": "sha256-OWKJratjt2RW151VUlJPRALb7OU2S5s+f0vLj4o1bHM=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "a4b154ebbdc88c8498a5c7b01589addc9e9cb678",
"lastModified": 1738591040,
"narHash": "sha256-4WNeriUToshQ/L5J+dTSWC5OJIwT39SEP7V7oylndi8=",
"owner": "gytis-ivaskevicius",
"repo": "flake-utils-plus",
"rev": "afcb15b845e74ac5e998358709b2b5fe42a948d1",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"owner": "gytis-ivaskevicius",
"repo": "flake-utils-plus",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1637014545,
"narHash": "sha256-26IZAc5yzlD9FlDT54io1oqG/bBoyka+FJk5guaX4x4=",
"lastModified": 1694529238,
"narHash": "sha256-zsNZZGTGnMOf9YpHKJqMSsa0dXbfmxeoJ7xHlrt+xmY=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "bba5dcc8e0b20ab664967ad83d24d64cb64ec4f4",
"rev": "ff7b65b44d01cf9ba6a71320833626af21126384",
"type": "github"
},
"original": {
@ -46,91 +36,41 @@
"type": "github"
}
},
"gitignoresrc": {
"flake": false,
"locked": {
"lastModified": 1646480205,
"narHash": "sha256-kekOlTlu45vuK2L9nq8iVN17V3sB0WWPqTTW3a2SQG0=",
"owner": "hercules-ci",
"repo": "gitignore.nix",
"rev": "bff2832ec341cf30acb3a4d3e2e7f1f7b590116a",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "gitignore.nix",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1649619156,
"narHash": "sha256-p0q4zpuKMwrzGF+5ZU7Thnpac5TinhDI9jr2mBxhV4w=",
"lastModified": 1750969886,
"narHash": "sha256-zW/OFnotiz/ndPFdebpo3X0CrbVNf22n4DjN2vxlb58=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "e7d63bd0d50df412f5a1d8acfa3caae75522e347",
"rev": "a676066377a2fe7457369dd37c31fd2263b662f4",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-21.11",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs-unstable": {
"locked": {
"lastModified": 1649497218,
"narHash": "sha256-groqC9m1P4hpnL6jQvZ3C8NEtduhdkvwGT0+0LUrcYw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "fd364d268852561223a5ada15caad669fd72800e",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1637453606,
"narHash": "sha256-Gy6cwUswft9xqsjWxFYEnx/63/qzaFUwatcbV5GF/GQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "8afc4e543663ca0a6a4f496262cd05233737e732",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"ref": "nixos-25.05",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"flake-compat": "flake-compat",
"flake-utils": "flake-utils",
"gitignoresrc": "gitignoresrc",
"nixpkgs": "nixpkgs",
"nixpkgs-unstable": "nixpkgs-unstable",
"rust-overlay": "rust-overlay"
}
},
"rust-overlay": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": "nixpkgs_2"
"nixpkgs": [
"nixpkgs"
]
},
"locked": {
"lastModified": 1649730901,
"narHash": "sha256-97J+EZ/HJmNU1oxi5WZMFjTIowlBOv4NS7VZ9kW1ZMw=",
"lastModified": 1751251399,
"narHash": "sha256-y+viCuy/eKKpkX1K2gDvXIJI/yzvy6zA3HObapz9XZ0=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "16b72898fa19d72192dce574eab796c9804d5d7e",
"rev": "b22d5ee8c60ed1291521f2dde48784edd6bf695b",
"type": "github"
},
"original": {
@ -138,6 +78,21 @@
"repo": "rust-overlay",
"type": "github"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",

flake.nix

@ -1,48 +1,89 @@
{
description = "basic rust flake";
rec {
description = "adf-bdd, Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-21.11";
nixpkgs-unstable.url = "github:NixOS/nixpkgs/nixos-unstable";
rust-overlay.url = "github:oxalica/rust-overlay";
flake-utils.url = "github:numtide/flake-utils";
flake-compat = {
url = "github:edolstra/flake-compat";
flake = false;
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
rust-overlay = {
url = "github:oxalica/rust-overlay";
inputs = {
nixpkgs.follows = "nixpkgs";
flake-utils.follows = "flake-utils/flake-utils";
};
gitignoresrc = {
url = "github:hercules-ci/gitignore.nix";
flake = false;
};
flake-utils.url = "github:gytis-ivaskevicius/flake-utils-plus";
};
outputs = { self, nixpkgs, nixpkgs-unstable, flake-utils, flake-compat, gitignoresrc, rust-overlay, ... }@inputs:
{
#overlay = import ./nix { inherit gitignoresrc; };
} // (flake-utils.lib.eachDefaultSystem (system:
let
unstable = import nixpkgs-unstable { inherit system; };
pkgs = import nixpkgs {
inherit system;
overlays = [ (import rust-overlay)];
outputs = inputs @ {
self,
flake-utils,
rust-overlay,
...
}:
flake-utils.lib.mkFlake {
inherit self inputs;
channels.nixpkgs.overlaysBuilder = channels: [rust-overlay.overlays.default];
outputsBuilder = channels: let
pkgs = channels.nixpkgs;
toolchain = pkgs.rust-bin.stable.latest.default;
platform = pkgs.makeRustPlatform {
cargo = toolchain;
rustc = toolchain;
};
in
rec {
devShell =
pkgs.mkShell {
RUST_LOG = "debug";
RUST_BACKTRACE = 1;
buildInputs = [
pkgs.rust-bin.nightly.latest.rustfmt
pkgs.rust-bin.stable.latest.default
unstable.rust-analyzer
pkgs.cargo-audit
pkgs.cargo-license
pkgs.cargo-tarpaulin
pkgs.cargo-kcov
pkgs.kcov
in rec {
packages = let
cargoMetaBin = (builtins.fromTOML (builtins.readFile ./bin/Cargo.toml)).package;
cargoMetaLib = (builtins.fromTOML (builtins.readFile ./lib/Cargo.toml)).package;
meta = {
inherit description;
homepage = "https://github.com/ellmau/adf-obdd";
license = [pkgs.lib.licenses.mit];
nativeBuildInputs = with platform; [
cargoBuildHook
cargoCheckHook
];
};
}
));
in rec {
adf-bdd = platform.buildRustPackage {
pname = "adf-bdd";
inherit (cargoMetaBin) version;
inherit meta;
src = ./.;
cargoLock.lockFile = ./Cargo.lock;
buildAndTestSubdir = "bin";
};
adf_bdd = platform.buildRustPackage {
pname = "adf_bdd";
inherit (cargoMetaLib) version;
inherit meta;
src = ./.;
cargoLock.lockFile = ./Cargo.lock;
buildAndTestSubdir = "lib";
};
};
devShells.default = pkgs.mkShell {
RUST_LOG = "debug";
RUST_BACKTRACE = 1;
shellHook = ''
export PATH=''${HOME}/.cargo/bin''${PATH+:''${PATH}}
'';
buildInputs = let
notOn = systems:
pkgs.lib.optionals (!builtins.elem pkgs.system systems);
in
[
toolchain
pkgs.rust-analyzer
pkgs.cargo-audit
pkgs.cargo-license
]
++ (notOn ["aarch64-darwin" "x86_64-darwin"] [pkgs.kcov pkgs.gnuplot pkgs.valgrind])
++ (notOn ["aarch64-linux" "aarch64-darwin" "i686-linux"] [pkgs.cargo-tarpaulin]);
};
};
};
}

frontend/.editorconfig Normal file

@ -0,0 +1,13 @@
root = true
[*]
end_of_line = lf
insert_final_newline = true
[*.{ts,tsx}]
indent_style = space
indent_size = 2
[package.json]
indent_style = space
indent_size = 2

frontend/.eslintrc.js Normal file

@ -0,0 +1,27 @@
module.exports = {
"env": {
"browser": true,
"es2021": true
},
"extends": [
"plugin:react/recommended",
"airbnb",
"airbnb-typescript",
],
"parser": "@typescript-eslint/parser",
"parserOptions": {
"ecmaFeatures": {
"jsx": true
},
"ecmaVersion": "latest",
"sourceType": "module",
"project": "tsconfig.json"
},
"plugins": [
"react",
"@typescript-eslint"
],
"rules": {
"react/jsx-filename-extension": [1, { "extensions": [".tsx"] }]
}
}

frontend/.gitignore vendored Normal file

@ -0,0 +1,5 @@
node_modules
dist
.parcel-cache
yarn-error.log

frontend/README.md Normal file

@ -0,0 +1,13 @@
# Frontend for Webservice
This directory contains the (standalone) frontend for <https://adf-bdd.dev> built using React, Material UI and Typescript.
## Usage
For local development run:
- `yarn install` to install the dependencies
- `yarn run check` to run typechecks and the linter (eslint)
- `yarn start` to start the development server listening on `localhost:1234`
The frontend tries to connect to the server at `localhost:8080` in development mode.

frontend/index.d.ts vendored Normal file

@ -0,0 +1,5 @@
declare module 'bundle-text:*' {
const s: string
export default s
}

frontend/package.json Normal file

@ -0,0 +1,40 @@
{
"name": "ADF-OBDD-Frontend",
"version": "0.1.0",
"source": "src/index.html",
"browserslist": "> 0.5%, last 2 versions, not dead",
"scripts": {
"check": "tsc --noEmit && eslint ./src",
"start": "parcel",
"build": "parcel build"
},
"devDependencies": {
"@parcel/transformer-inline-string": "2.9.3",
"@types/node": "^20.4.6",
"@types/react": "^18.2.18",
"@types/react-dom": "^18.2.7",
"@typescript-eslint/eslint-plugin": "^6.2.1",
"@typescript-eslint/parser": "^6.2.1",
"eslint": "^8.46.0",
"eslint-config-airbnb": "^19.0.4",
"eslint-config-airbnb-typescript": "^17.1.0",
"eslint-plugin-import": "^2.28.0",
"eslint-plugin-jsx-a11y": "^6.7.1",
"eslint-plugin-react": "^7.33.1",
"parcel": "^2.9.3",
"process": "^0.11.10",
"typescript": "^5.1.6"
},
"dependencies": {
"@antv/g6": "^4.8.20",
"@emotion/react": "^11.11.1",
"@emotion/styled": "^11.11.0",
"@fontsource/roboto": "^5.0.6",
"@mui/icons-material": "^5.14.3",
"@mui/material": "^5.14.3",
"markdown-to-jsx": "^7.2.1",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.14.2"
}
}

frontend/shell.nix Normal file

@ -0,0 +1,8 @@
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
buildInputs = [
pkgs.yarn
];
}

frontend/src/app.tsx Normal file

@ -0,0 +1,13 @@
import * as React from 'react';
import { createRoot } from 'react-dom/client';
import '@fontsource/roboto/300.css';
import '@fontsource/roboto/400.css';
import '@fontsource/roboto/500.css';
import '@fontsource/roboto/700.css';
import App from './components/app';
const container = document.getElementById('app');
const root = createRoot(container!);
root.render(<App />);


@ -0,0 +1,371 @@
import React, {
useState, useContext, useEffect, useCallback, useRef,
} from 'react';
import { useParams, useNavigate } from 'react-router-dom';
import {
Accordion,
AccordionDetails,
AccordionSummary,
Alert,
AlertColor,
Button,
Chip,
Container,
Grid,
Paper,
Pagination,
Skeleton,
Stack,
Tabs,
Tab,
TextField,
Typography,
} from '@mui/material';
import ExpandMoreIcon from '@mui/icons-material/ExpandMore';
import DetailInfoMd from 'bundle-text:../help-texts/detail-info.md';
import Markdown from './markdown';
import GraphG6, { GraphProps } from './graph-g6';
import LoadingContext from './loading-context';
import SnackbarContext from './snackbar-context';
export type Parsing = 'Naive' | 'Hybrid';
export type StrategySnakeCase = 'parse_only' | 'ground' | 'complete' | 'stable' | 'stable_counting_a' | 'stable_counting_b' | 'stable_nogood';
export type StrategyCamelCase = 'ParseOnly' | 'Ground' | 'Complete' | 'Stable' | 'StableCountingA' | 'StableCountingB' | 'StableNogood';
export const STRATEGIES_WITHOUT_PARSE: StrategyCamelCase[] = ['Ground', 'Complete', 'Stable', 'StableCountingA', 'StableCountingB', 'StableNogood'];
export interface AcAndGraph {
ac: string[],
graph: GraphProps,
}
export type AcsWithGraphsOpt = {
type: 'None',
} | {
type: 'Error',
content: string
} | {
type: 'Some',
content: AcAndGraph[]
};
export type Task = {
type: 'Parse',
} | {
type: 'Solve',
content: StrategyCamelCase,
};
export interface AdfProblemInfo {
name: string,
code: string,
parsing_used: Parsing,
// NOTE: the keys are really only strategies
acs_per_strategy: { [key in StrategySnakeCase]: AcsWithGraphsOpt },
running_tasks: Task[],
}
export function acsWithGraphOptToColor(status: AcsWithGraphsOpt, running: boolean): AlertColor {
if (running) {
return 'warning';
}
switch (status.type) {
case 'None': return 'info';
case 'Error': return 'error';
case 'Some': return 'success';
default:
throw new Error('Unknown type union variant (cannot occur)');
}
}
export function acsWithGraphOptToText(status: AcsWithGraphsOpt, running: boolean): string {
if (running) {
return 'Running';
}
switch (status.type) {
case 'None': return 'Not attempted';
case 'Error': return 'Failed';
case 'Some': return 'Done';
default:
throw new Error('Unknown type union variant (cannot occur)');
}
}
function AdfDetails() {
const { adfName } = useParams();
const navigate = useNavigate();
const { setLoading } = useContext(LoadingContext);
const { status: snackbarInfo, setStatus: setSnackbarInfo } = useContext(SnackbarContext);
const [problem, setProblem] = useState<AdfProblemInfo>();
const [tab, setTab] = useState<StrategySnakeCase>('parse_only');
const [solutionIndex, setSolutionIndex] = useState<number>(0);
const isFirstRender = useRef(true);
const fetchProblem = useCallback(
() => {
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/adf/${adfName}`, {
method: 'GET',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
})
.then((res) => {
switch (res.status) {
case 200:
res.json().then((resProblem) => {
setProblem(resProblem);
});
break;
default:
navigate('/');
break;
}
});
},
[setProblem],
);
const solveHandler = useCallback(
(strategy: StrategyCamelCase) => {
setLoading(true);
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/adf/${adfName}/solve`, {
method: 'PUT',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ strategy }),
})
.then((res) => {
switch (res.status) {
case 200:
setSnackbarInfo({ message: 'Solving problem now...', severity: 'success', potentialUserChange: false });
fetchProblem();
break;
default:
setSnackbarInfo({ message: 'Something went wrong trying to solve the problem.', severity: 'error', potentialUserChange: false });
break;
}
})
.finally(() => setLoading(false));
},
[adfName],
);
const deleteHandler = useCallback(
() => {
setLoading(true);
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/adf/${adfName}`, {
method: 'DELETE',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
})
.then((res) => {
switch (res.status) {
case 200:
setSnackbarInfo({ message: 'ADF Problem deleted.', severity: 'success', potentialUserChange: false });
navigate('/');
break;
default:
break;
}
})
.finally(() => setLoading(false));
},
[adfName],
);
useEffect(
() => {
// TODO: having the info if the user may have changed on the snackbar info
// is a bit lazy and unclean; be better!
if (isFirstRender.current || snackbarInfo?.potentialUserChange) {
isFirstRender.current = false;
fetchProblem();
}
},
[snackbarInfo?.potentialUserChange],
);
useEffect(
() => {
// if there is a running task, fetch problems again after 20 seconds
let timeout: ReturnType<typeof setTimeout>;
if (problem && problem.running_tasks.length > 0) {
timeout = setTimeout(() => fetchProblem(), 20000);
}
return () => {
if (timeout) {
clearTimeout(timeout);
}
};
},
[problem],
);
const acsOpt = problem?.acs_per_strategy[tab];
const acsContent = acsOpt?.type === 'Some' ? acsOpt.content : undefined;
const tabCamelCase: StrategyCamelCase = tab.replace(/^([a-z])/, (_, p1) => p1.toUpperCase()).replace(/_([a-z])/g, (_, p1) => `${p1.toUpperCase()}`) as StrategyCamelCase;
return (
<>
<Typography variant="h3" component="h1" align="center" gutterBottom>
ADF-BDD.DEV
</Typography>
<Container sx={{ marginTop: 2, marginBottom: 2 }}>
<Accordion>
<AccordionSummary expandIcon={<ExpandMoreIcon />}>
<span style={{ fontWeight: 'bold' }}>What can I do with the ADF now?</span>
</AccordionSummary>
<AccordionDetails>
<Grid container alignItems="center" spacing={2}>
<Grid item xs={12} sm={8}>
<Markdown>{DetailInfoMd}</Markdown>
</Grid>
<Grid item xs={12} sm={4}>
<img
src={new URL('../help-texts/example-bdd.png', import.meta.url).toString()}
alt="Example BDD"
style={{ maxWidth: '100%', borderRadius: 4, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)' }}
/>
</Grid>
</Grid>
</AccordionDetails>
</Accordion>
</Container>
<Container sx={{ marginBottom: 4 }}>
{problem ? (
<>
<Paper elevation={8} sx={{ padding: 2, marginBottom: 2 }}>
<Stack direction="row" justifyContent="space-between" sx={{ marginBottom: 1 }}>
<Button
variant="outlined"
color="info"
onClick={() => { navigate('/'); }}
>
Back
</Button>
<Typography variant="h4" component="h2" align="center" gutterBottom>
{problem.name}
</Typography>
<Button
type="button"
variant="outlined"
color="error"
onClick={() => {
// eslint-disable-next-line no-alert
if (window.confirm('Are you sure that you want to delete this ADF problem?')) {
deleteHandler();
}
}}
>
Delete
</Button>
</Stack>
<TextField
name="code"
label="Code"
helperText="Click here to copy!"
multiline
maxRows={5}
fullWidth
variant="filled"
value={problem.code.trim()}
disabled
sx={{ cursor: 'pointer' }}
onClick={() => { navigator.clipboard.writeText(problem.code); setSnackbarInfo({ message: 'Code copied to clipboard!', severity: 'info', potentialUserChange: false }); }}
/>
</Paper>
<Tabs
value={tab}
onChange={(_e, newTab) => { setTab(newTab); setSolutionIndex(0); }}
variant="scrollable"
scrollButtons="auto"
>
<Tab wrapped value="parse_only" label={<Chip color={acsWithGraphOptToColor(problem.acs_per_strategy.parse_only, problem.running_tasks.some((t: Task) => t.type === 'Parse'))} label={`${problem.parsing_used} Parsing`} sx={{ cursor: 'inherit' }} />} />
{STRATEGIES_WITHOUT_PARSE.map((strategy) => {
const spaced = strategy.replace(/([A-Za-z])([A-Z])/g, '$1 $2');
const snakeCase = strategy.replace(/^([A-Z])/, (_, p1) => p1.toLowerCase()).replace(/([A-Z])/g, (_, p1) => `_${p1.toLowerCase()}`) as StrategySnakeCase;
const status = problem.acs_per_strategy[snakeCase];
const running = problem.running_tasks.some((t: Task) => t.type === 'Solve' && t.content === strategy);
const color = acsWithGraphOptToColor(status, running);
return <Tab key={strategy} wrapped value={snakeCase} label={<Chip color={color} label={spaced} sx={{ cursor: 'inherit' }} />} />;
})}
</Tabs>
{acsContent && acsContent.length > 1 && (
<>
Models:
<br />
<Pagination variant="outlined" shape="rounded" count={acsContent.length} page={solutionIndex + 1} onChange={(_e, newIdx) => setSolutionIndex(newIdx - 1)} />
</>
)}
<Paper elevation={3} square sx={{ padding: 2, marginTop: 4, marginBottom: 4 }}>
{problem.running_tasks.some((t: Task) => (tab === 'parse_only' && t.type === 'Parse') || (t.type === 'Solve' && t.content === tabCamelCase)) ? (
<Alert severity="warning">Working hard to solve the problem right now...</Alert>
) : (
<>
{acsContent && acsContent.length > 0 && (
<GraphG6 graph={acsContent[solutionIndex].graph} />
)}
{acsContent && acsContent.length === 0 && (
<Alert severity="info">The problem has no models for this strategy.</Alert>
)}
{!acsContent && acsOpt?.type === 'Error' && (
<Alert severity="error">
An error occurred:
{acsOpt.content}
</Alert>
)}
{!acsContent && acsOpt?.type === 'None' && (
<>
<Alert severity="info" sx={{ marginBottom: 1 }}>This strategy was not attempted yet.</Alert>
<Button
variant="contained"
size="large"
color="warning"
onClick={() => {
solveHandler(tabCamelCase);
}}
>
Solve now!
</Button>
</>
)}
</>
)}
</Paper>
</>
) : (
<>
<Paper elevation={8} sx={{ padding: 2, marginBottom: 8 }}>
<Skeleton variant="text" width="50%" sx={{ fontSize: '2.125rem', margin: 'auto' }} />
<Skeleton variant="rounded" width="100%" height={200} />
</Paper>
<Skeleton variant="rectangular" width="100%" height={500} />
</>
)}
</Container>
</>
);
}
export default AdfDetails;


@ -0,0 +1,187 @@
import React, {
useState, useContext, useCallback, useRef,
} from 'react';
import {
Button,
Container,
FormControl,
FormControlLabel,
FormLabel,
Link,
Paper,
Radio,
RadioGroup,
Stack,
Typography,
TextField,
ToggleButtonGroup,
ToggleButton,
} from '@mui/material';
import LoadingContext from './loading-context';
import SnackbarContext from './snackbar-context';
import { Parsing } from './adf-details';
const PLACEHOLDER = `s(a).
s(b).
s(c).
s(d).
ac(a,c(v)).
ac(b,b).
ac(c,and(a,b)).
ac(d,neg(b)).`;
function AdfNewForm({ fetchProblems }: { fetchProblems: () => void; }) {
const { setLoading } = useContext(LoadingContext);
const { setStatus: setSnackbarInfo } = useContext(SnackbarContext);
const [isFileUpload, setFileUpload] = useState(false);
const [code, setCode] = useState(PLACEHOLDER);
const [filename, setFilename] = useState('');
const [parsing, setParsing] = useState<Parsing>('Naive');
const [isAf, setIsAf] = useState(false);
const [name, setName] = useState('');
const fileRef = useRef<HTMLInputElement>(null);
const addAdf = useCallback(
() => {
setLoading(true);
const formData = new FormData();
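// multipart form: either the selected file or the typed code, plus the parsing options and optional name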
if (isFileUpload && fileRef.current) {
const file = fileRef.current.files?.[0];
if (file) {
formData.append('file', file);
}
} else {
formData.append('code', code);
}
formData.append('parsing', parsing);
formData.append('is_af', String(isAf));
formData.append('name', name);
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/adf/add`, {
method: 'POST',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
body: formData,
})
.then((res) => {
switch (res.status) {
case 200:
setSnackbarInfo({ message: 'Successfully added ADF problem!', severity: 'success', potentialUserChange: true });
fetchProblems();
break;
default:
setSnackbarInfo({ message: 'An error occurred while adding the ADF problem.', severity: 'error', potentialUserChange: true });
break;
}
})
.finally(() => setLoading(false));
},
[isFileUpload, code, filename, parsing, isAf, name, fileRef.current],
);
return (
<Container>
<Paper elevation={8} sx={{ padding: 2 }}>
<Typography variant="h4" component="h2" align="center" gutterBottom>
Add a new Problem
</Typography>
<Container sx={{ marginTop: 2, marginBottom: 2 }}>
<Stack direction="row" justifyContent="center">
<ToggleButtonGroup
value={isFileUpload}
exclusive
onChange={(_e, newValue) => { setFileUpload(newValue); setFilename(''); }}
>
<ToggleButton value={false}>
Write by Hand
</ToggleButton>
<ToggleButton value>
Upload File
</ToggleButton>
</ToggleButtonGroup>
</Stack>
</Container>
<Container sx={{ marginTop: 2, marginBottom: 2 }}>
{isFileUpload ? (
<Stack direction="row" justifyContent="center">
<Button component="label">
{(!!filename && fileRef?.current?.files?.[0]) ? `File '${filename.split(/[\\/]/).pop()}' selected! (Click to change)` : 'Upload File'}
<input hidden type="file" onChange={(event) => { setFilename(event.target.value); }} ref={fileRef} />
</Button>
</Stack>
) : (
<TextField
name="code"
label="Put your code here:"
helperText={(
<>
For more info on the ADF syntax, have a
look
{' '}
<Link href="https://github.com/ellmau/adf-obdd" target="_blank" rel="noopener noreferrer">here</Link>
. For the AF syntax, we currently support only the ICCMA competition format; see for example
{' '}
<Link href="https://argumentationcompetition.org/2025/rules.html" target="_blank" rel="noopener noreferrer">here</Link>
.
</>
)}
multiline
fullWidth
variant="filled"
value={code}
onChange={(event) => { setCode(event.target.value); }}
/>
)}
</Container>
<Container sx={{ marginTop: 2 }}>
<Stack direction="row" justifyContent="center" spacing={2}>
<FormControl>
<FormLabel id="isAf-radio-group">ADF or AF?</FormLabel>
<RadioGroup
row
aria-labelledby="isAf-radio-group"
name="isAf"
value={isAf}
onChange={(e) => setIsAf((e.target as HTMLInputElement).value === 'true')}
>
<FormControlLabel value={false} control={<Radio />} label="ADF" />
<FormControlLabel value={true} control={<Radio />} label="AF" />
</RadioGroup>
<span style={{ fontSize: "0.7em" }}>AFs are converted to ADFs internally.</span>
</FormControl>
<FormControl>
<FormLabel id="parsing-radio-group">Parsing Strategy</FormLabel>
<RadioGroup
row
aria-labelledby="parsing-radio-group"
name="parsing"
value={parsing}
onChange={(e) => setParsing(((e.target as HTMLInputElement).value) as Parsing)}
>
<FormControlLabel value="Naive" control={<Radio />} label="Naive" />
<FormControlLabel value="Hybrid" control={<Radio />} label="Hybrid" />
</RadioGroup>
</FormControl>
<TextField
name="name"
label="Adf Problem Name (optional):"
variant="standard"
value={name}
onChange={(event) => { setName(event.target.value); }}
/>
<Button variant="outlined" onClick={() => addAdf()}>Add Adf Problem</Button>
</Stack>
</Container>
</Paper>
</Container>
);
}
export default AdfNewForm;


@@ -0,0 +1,189 @@
import React, {
useRef, useState, useCallback, useEffect, useContext,
} from 'react';
import {
useNavigate,
} from 'react-router-dom';
import {
Accordion,
AccordionDetails,
AccordionSummary,
Chip,
Container,
Paper,
TableContainer,
Table,
TableHead,
TableRow,
TableCell,
TableBody,
Typography,
} from '@mui/material';
import ExpandMoreIcon from '@mui/icons-material/ExpandMore';
import AddInfoMd from 'bundle-text:../help-texts/add-info.md';
import Markdown from './markdown';
import AdfNewForm from './adf-new-form';
import {
AdfProblemInfo,
StrategySnakeCase,
STRATEGIES_WITHOUT_PARSE,
Task,
acsWithGraphOptToColor,
acsWithGraphOptToText,
} from './adf-details';
import SnackbarContext from './snackbar-context';
function AdfOverview() {
const { status: snackbarInfo } = useContext(SnackbarContext);
const [problems, setProblems] = useState<AdfProblemInfo[]>([]);
const navigate = useNavigate();
const isFirstRender = useRef(true);
const fetchProblems = useCallback(
() => {
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/adf/`, {
method: 'GET',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
})
.then((res) => {
switch (res.status) {
case 200:
res.json().then((resProblems) => {
setProblems(resProblems);
});
break;
case 401:
setProblems([]);
break;
default:
break;
}
});
},
[setProblems],
);
useEffect(
() => {
// TODO: having the info if the user may have changed on the snackbar info
// is a bit lazy and unclean; be better!
if (isFirstRender.current || snackbarInfo?.potentialUserChange) {
isFirstRender.current = false;
fetchProblems();
}
},
[snackbarInfo?.potentialUserChange],
);
useEffect(
() => {
// if there is a running task, fetch problems again after 20 seconds
let timeout: ReturnType<typeof setTimeout>;
if (problems.some((p) => p.running_tasks.length > 0)) {
timeout = setTimeout(() => fetchProblems(), 20000);
}
return () => {
if (timeout) {
clearTimeout(timeout);
}
};
},
[problems],
);
return (
<>
<Typography variant="h3" component="h1" align="center" gutterBottom>
ADF-BDD.DEV
</Typography>
<Container sx={{ marginTop: 2, marginBottom: 2 }}>
<Accordion>
<AccordionSummary expandIcon={<ExpandMoreIcon />}>
<span style={{ fontWeight: 'bold' }}>What is this webapp doing and how should I use it?</span>
</AccordionSummary>
<AccordionDetails>
<Markdown>{AddInfoMd}</Markdown>
</AccordionDetails>
</Accordion>
</Container>
{problems.length > 0
&& (
<Container sx={{ marginBottom: 4 }}>
<Paper elevation={8} sx={{ padding: 2 }}>
<Typography variant="h4" component="h2" align="center" gutterBottom>
Existing Problems
</Typography>
<TableContainer component={Paper}>
<Table>
<TableHead>
<TableRow>
<TableCell align="center">ADF Problem Name</TableCell>
<TableCell align="center">Parse Status</TableCell>
<TableCell align="center">Grounded Solution</TableCell>
<TableCell align="center">Complete Solution</TableCell>
<TableCell align="center">Stable Solution</TableCell>
<TableCell align="center">Stable Solution (Counting Method A)</TableCell>
<TableCell align="center">Stable Solution (Counting Method B)</TableCell>
<TableCell align="center">Stable Solution (Nogood-Based)</TableCell>
</TableRow>
</TableHead>
<TableBody>
{problems.map((problem) => (
<TableRow
key={problem.name}
onClick={() => { navigate(`/${problem.name}`); }}
sx={{ '&:last-child td, &:last-child th': { border: 0 }, cursor: 'pointer' }}
>
<TableCell component="th" scope="row">
{problem.name}
</TableCell>
{
(() => {
const status = problem.acs_per_strategy.parse_only;
const running = problem.running_tasks.some((t: Task) => t.type === 'Parse');
const color = acsWithGraphOptToColor(status, running);
const text = acsWithGraphOptToText(status, running);
return <TableCell align="center"><Chip color={color} label={`${text} (${problem.parsing_used} Parsing)`} sx={{ cursor: 'inherit' }} /></TableCell>;
})()
}
{
STRATEGIES_WITHOUT_PARSE.map((strategy) => {
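// same CamelCase -> snake_case key conversion as in the details view
// (e.g. 'StableCountingA' -> 'stable_counting_a')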
const status = problem.acs_per_strategy[strategy.replace(/^([A-Z])/, (_, p1) => p1.toLowerCase()).replace(/([A-Z])/g, (_, p1) => `_${p1.toLowerCase()}`) as StrategySnakeCase];
const running = problem.running_tasks.some((t: Task) => t.type === 'Solve' && t.content === strategy);
const color = acsWithGraphOptToColor(status, running);
const text = acsWithGraphOptToText(status, running);
return <TableCell key={strategy} align="center"><Chip color={color} label={text} sx={{ cursor: 'inherit' }} /></TableCell>;
})
}
</TableRow>
))}
</TableBody>
</Table>
</TableContainer>
</Paper>
</Container>
)}
<AdfNewForm fetchProblems={fetchProblems} />
</>
);
}
export default AdfOverview;


@@ -0,0 +1,155 @@
import React, { useState, useMemo } from 'react';
import { createBrowserRouter, RouterProvider } from 'react-router-dom';
import { ThemeProvider, createTheme } from '@mui/material/styles';
import {
Alert,
AlertColor,
Backdrop,
Container,
CircularProgress,
CssBaseline,
Link,
Snackbar,
Stack,
useMediaQuery,
} from '@mui/material';
import LoadingContext from './loading-context';
import SnackbarContext from './snackbar-context';
import Footer from './footer';
import AdfOverview from './adf-overview';
import AdfDetails from './adf-details';
const browserRouter = createBrowserRouter([
{
path: '/',
element: <AdfOverview />,
},
{
path: '/:adfName',
element: <AdfDetails />,
},
]);
function App() {
const prefersDarkMode = useMediaQuery('(prefers-color-scheme: dark)');
const theme = useMemo(
() => createTheme({
palette: {
mode: prefersDarkMode ? 'dark' : 'light',
},
}),
[prefersDarkMode],
);
const [loading, setLoading] = useState(false);
const loadingContext = useMemo(() => ({ loading, setLoading }), [loading, setLoading]);
const [snackbarInfo, setSnackbarInfo] = useState<{
message: string,
severity: AlertColor,
potentialUserChange: boolean,
} | undefined>();
const snackbarContext = useMemo(
() => ({ status: snackbarInfo, setStatus: setSnackbarInfo }),
[snackbarInfo, setSnackbarInfo],
);
return (
<ThemeProvider theme={theme}>
<LoadingContext.Provider value={loadingContext}>
<SnackbarContext.Provider value={snackbarContext}>
<CssBaseline />
<main style={{ maxHeight: 'calc(100vh - 70px)', overflowY: 'auto' }}>
<RouterProvider router={browserRouter} />
<Container sx={{ marginTop: 4 }}>
<Stack direction="row" justifyContent="center" flexWrap="wrap">
<Link href="https://www.innosale.eu/" target="_blank" rel="noopener noreferrer">
<img
src={new URL('../innosale-logo.png', import.meta.url).toString()}
alt="InnoSale Logo"
height="40"
style={{
display: 'inline-block', borderRadius: 4, margin: 2, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)', padding: 8, background: '#FFFFFF',
}}
/>
</Link>
<Link href="https://scads.ai/" target="_blank" rel="noopener noreferrer">
<img
src={new URL('../scads-logo.png', import.meta.url).toString()}
alt="Scads.AI Logo"
height="40"
style={{
display: 'inline-block', borderRadius: 4, margin: 2, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)', padding: 2, background: '#FFFFFF',
}}
/>
</Link>
<Link href="https://secai.org/" target="_blank" rel="noopener noreferrer">
<img
src={new URL('../secai-logo.png', import.meta.url).toString()}
alt="Secai Logo"
height="40"
style={{
display: 'inline-block', borderRadius: 4, margin: 2, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)',
}}
/>
</Link>
<Link href="https://perspicuous-computing.science" target="_blank" rel="noopener noreferrer">
<img
src={new URL('../cpec-logo.png', import.meta.url).toString()}
alt="CPEC Logo"
height="40"
style={{
display: 'inline-block', borderRadius: 4, margin: 2, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)', padding: 8, background: '#FFFFFF',
}}
/>
</Link>
<Link href="https://iccl.inf.tu-dresden.de" target="_blank" rel="noopener noreferrer">
<img
src={new URL('../iccl-logo.png', import.meta.url).toString()}
alt="ICCL Logo"
height="40"
style={{
display: 'inline-block', borderRadius: 4, margin: 2, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)', padding: 4, background: '#FFFFFF',
}}
/>
</Link>
<Link href="https://tu-dresden.de" target="_blank" rel="noopener noreferrer">
<img
src={new URL('../tud-logo.png', import.meta.url).toString()}
alt="TU Dresden Logo"
height="40"
style={{
display: 'inline-block', borderRadius: 4, margin: 2, boxShadow: '0 0 5px 0 rgba(0,0,0,0.4)',
}}
/>
</Link>
</Stack>
</Container>
</main>
<Footer />
<Backdrop
open={loading}
>
<CircularProgress color="inherit" />
</Backdrop>
<Snackbar
open={!!snackbarInfo}
autoHideDuration={10000}
onClose={() => setSnackbarInfo(undefined)}
>
<Alert severity={snackbarInfo?.severity}>{snackbarInfo?.message}</Alert>
</Snackbar>
</SnackbarContext.Provider>
</LoadingContext.Provider>
</ThemeProvider>
);
}
export default App;


@@ -0,0 +1,247 @@
import React, {
useState, useCallback, useContext, useEffect, useRef,
} from 'react';
import {
AlertColor,
Alert,
AppBar,
Box,
Button,
Dialog,
DialogActions,
DialogContent,
DialogTitle,
Link,
TextField,
Toolbar,
} from '@mui/material';
import LoadingContext from './loading-context';
import SnackbarContext from './snackbar-context';
enum UserFormType {
Login = 'Login',
Register = 'Register',
Update = 'Update',
}
interface UserFormProps {
formType: UserFormType | null;
close: (message?: string, severity?: AlertColor) => void;
username?: string;
}
function UserForm({ username: propUsername, formType, close }: UserFormProps) {
const { setLoading } = useContext(LoadingContext);
const [username, setUsername] = useState<string>(propUsername || '');
const [password, setPassword] = useState<string>('');
const [errorOccurred, setError] = useState<boolean>(false);
const submitHandler = useCallback(
(del: boolean) => {
setLoading(true);
setError(false);
let method;
let endpoint;
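// map the requested account action to the matching users endpoint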
if (del) {
method = 'DELETE';
endpoint = '/users/delete';
} else {
switch (formType) {
case UserFormType.Login:
method = 'POST';
endpoint = '/users/login';
break;
case UserFormType.Register:
method = 'POST';
endpoint = '/users/register';
break;
case UserFormType.Update:
method = 'PUT';
endpoint = '/users/update';
break;
default:
// NOTE: the value is not null when the dialog is open
break;
}
}
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}${endpoint}`, {
method,
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
body: !del ? JSON.stringify({ username, password }) : undefined,
})
.then((res) => {
switch (res.status) {
case 200:
close(`Action '${del ? 'Delete' : formType}' successful!`, 'success');
break;
default:
setError(true);
break;
}
})
.finally(() => setLoading(false));
},
[username, password, formType],
);
return (
<form onSubmit={(e) => { e.preventDefault(); submitHandler(false); }}>
<DialogTitle>{formType}</DialogTitle>
<DialogContent>
<TextField
variant="standard"
type="text"
label="Username"
value={username}
onChange={(event) => { setUsername(event.target.value); }}
/>
<br />
<TextField
variant="standard"
type="password"
label="Password"
value={password}
onChange={(event) => { setPassword(event.target.value); }}
/>
{errorOccurred
&& <Alert severity="error">Check your inputs!</Alert>}
</DialogContent>
<DialogActions>
<Button type="button" onClick={() => close()}>Cancel</Button>
<Button type="submit" variant="contained" color="success">{formType}</Button>
{formType === UserFormType.Update
// TODO: add another confirm dialog here
&& (
<Button
type="button"
variant="outlined"
color="error"
onClick={() => {
// eslint-disable-next-line no-alert
if (window.confirm('Are you sure that you want to delete your account?')) {
submitHandler(true);
}
}}
>
Delete Account
</Button>
)}
</DialogActions>
</form>
);
}
UserForm.defaultProps = { username: undefined };
function Footer() {
const { status: snackbarInfo, setStatus: setSnackbarInfo } = useContext(SnackbarContext);
const [username, setUsername] = useState<string>();
const [tempUser, setTempUser] = useState<boolean>();
const [dialogTypeOpen, setDialogTypeOpen] = useState<UserFormType | null>(null);
const isFirstRender = useRef(true);
const logout = useCallback(() => {
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/users/logout`, {
method: 'DELETE',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
})
.then((res) => {
switch (res.status) {
case 200:
setSnackbarInfo({ message: 'Logout successful!', severity: 'success', potentialUserChange: true });
setUsername(undefined);
break;
default:
setSnackbarInfo({ message: 'An error occurred while trying to log out.', severity: 'error', potentialUserChange: false });
break;
}
});
}, [setSnackbarInfo]);
useEffect(() => {
// TODO: having the info if the user may have changed on the snackbar info
// is a bit lazy and unclean; be better!
if (isFirstRender.current || snackbarInfo?.potentialUserChange) {
isFirstRender.current = false;
fetch(`${process.env.NODE_ENV === 'development' ? '//localhost:8080' : ''}/users/info`, {
method: 'GET',
credentials: process.env.NODE_ENV === 'development' ? 'include' : 'same-origin',
headers: {
'Content-Type': 'application/json',
},
})
.then((res) => {
switch (res.status) {
case 200:
res.json().then(({ username: user, temp }) => {
setUsername(user);
setTempUser(temp);
});
break;
default:
setUsername(undefined);
break;
}
});
}
}, [snackbarInfo?.potentialUserChange]);
return (
<>
<AppBar position="fixed" sx={{ top: 'auto', bottom: 0 }}>
<Toolbar sx={{ justifyContent: 'center', alignItems: 'center' }}>
<Box sx={{ flexGrow: 1 }}>
{username ? (
<>
<span>
Logged in as:
{' '}
{username}
{' '}
{tempUser ? '(Temporary User. Edit to set a password!)' : undefined}
</span>
<Button color="inherit" onClick={() => setDialogTypeOpen(UserFormType.Update)}>Edit</Button>
{!tempUser && <Button color="inherit" onClick={() => logout()}>Logout</Button>}
</>
) : (
<>
<Button color="inherit" onClick={() => setDialogTypeOpen(UserFormType.Login)}>Login</Button>
<Button color="inherit" onClick={() => setDialogTypeOpen(UserFormType.Register)}>Register</Button>
</>
)}
</Box>
<Link color="inherit" href="/legal.html" target="_blank" sx={{ fontSize: '0.8rem' }}>
Legal Information (Impressum and Data Protection Regulation)
</Link>
</Toolbar>
</AppBar>
<Dialog open={!!dialogTypeOpen} onClose={() => setDialogTypeOpen(null)}>
<UserForm
formType={dialogTypeOpen}
close={(message, severity) => {
setDialogTypeOpen(null);
setSnackbarInfo((!!message && !!severity)
? { message, severity, potentialUserChange: true }
: undefined);
}}
username={dialogTypeOpen === UserFormType.Update ? username : undefined}
/>
</Dialog>
</>
);
}
export default Footer;


@@ -0,0 +1,381 @@
import React, { useEffect, useRef } from 'react';
import G6, { Graph } from '@antv/g6';
G6.registerNode('nodeWithFlag', {
draw(cfg, group) {
const mainWidth = Math.max(30, 5 * (cfg!.mainLabel as string).length + 10);
const mainHeight = 30;
const keyShape = group!.addShape('rect', {
attrs: {
width: mainWidth,
height: mainHeight,
radius: 2,
fill: 'white',
stroke: 'black',
cursor: 'pointer',
},
name: 'rectMainLabel',
draggable: true,
});
group!.addShape('text', {
attrs: {
x: mainWidth / 2,
y: mainHeight / 2,
textAlign: 'center',
textBaseline: 'middle',
text: cfg!.mainLabel,
fill: '#212121',
fontFamily: 'Roboto',
cursor: 'pointer',
},
// must be assigned in G6 3.3 and later versions; it can be any value you want
name: 'textMainLabel',
// allow the shape to response the drag events
draggable: true,
});
if (cfg!.subLabel) {
const subWidth = 5 * (cfg!.subLabel as string).length + 4;
const subHeight = 20;
const subRectX = mainWidth - 4;
const subRectY = -subHeight + 4;
group!.addShape('rect', {
attrs: {
x: subRectX,
y: subRectY,
width: subWidth,
height: subHeight,
radius: 1,
fill: '#4caf50',
stroke: '#1b5e20',
cursor: 'pointer',
},
name: 'rectSubLabel',
draggable: true,
});
group!.addShape('text', {
attrs: {
x: subRectX + subWidth / 2,
y: subRectY + subHeight / 2,
textAlign: 'center',
textBaseline: 'middle',
text: cfg!.subLabel,
fill: '#212121',
fontFamily: 'Roboto',
fontSize: 10,
cursor: 'pointer',
},
// must be assigned in G6 3.3 and later versions; it can be any value you want
name: 'textSubLabel',
// allow the shape to response the drag events
draggable: true,
});
}
return keyShape;
},
getAnchorPoints() {
return [[0.5, 0], [0, 0.5], [1, 0.5], [0.5, 1]];
},
// nodeStateStyles: {
// hover: {
// fill: 'lightsteelblue',
// },
// highlight: {
// lineWidth: 3,
// },
// lowlight: {
// opacity: 0.3,
// },
// },
setState(name, value, item) {
if (!item) { return; }
const group = item.getContainer();
const mainShape = group.get('children')[0]; // Find the first graphics shape of the node. It is determined by the order of being added
const subShape = group.get('children')[2];
if (name === 'hover') {
if (value) {
mainShape.attr('fill', 'lightsteelblue');
} else {
mainShape.attr('fill', 'white');
}
}
if (name === 'highlight') {
if (value) {
mainShape.attr('lineWidth', 3);
} else {
mainShape.attr('lineWidth', 1);
}
}
if (name === 'lowlight') {
if (value) {
mainShape.attr('opacity', 0.3);
if (subShape) {
subShape.attr('opacity', 0.3);
}
} else {
mainShape.attr('opacity', 1);
if (subShape) {
subShape.attr('opacity', 1);
}
}
}
},
});
export interface GraphProps {
lo_edges: [string, string][],
hi_edges: [string, string][],
node_labels: { [key: string]: string },
tree_root_labels: { [key: string]: string[] },
}
function nodesAndEdgesFromGraphProps(graphProps: GraphProps) {
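// Translate the backend BDD graph into G6 node/edge configs:
// each node carries its label and an optional 'Root for:' badge;
// lo edges are drawn orange, hi edges blue (matching the legend below the canvas).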
const nodes = Object.keys(graphProps.node_labels).map((id) => {
const mainLabel = graphProps.node_labels[id];
const subLabel = graphProps.tree_root_labels[id].length > 0 ? `Root for: ${graphProps.tree_root_labels[id].join(' ; ')}` : undefined;
// const label = subLabel.length > 0 ? `${mainLabel}\n${subLabel}` : mainLabel;
return {
id: id.toString(),
mainLabel,
subLabel,
// style: {
// height: subLabel.length > 0 ? 60 : 30,
// width: Math.max(30, 5 * mainLabel.length + 10, 5 * subLabel.length + 10),
// },
};
});
const edges = graphProps.lo_edges.map(([source, target]) => ({
id: `LO_${source}_${target}`, source: source.toString(), target: target.toString(), style: { stroke: '#ed6c02', lineWidth: 2 },
}))
.concat(graphProps.hi_edges.map(([source, target]) => ({
id: `HI_${source}_${target}`, source: source.toString(), target: target.toString(), style: { stroke: '#1976d2', lineWidth: 2 },
})));
return { nodes, edges };
}
interface Props {
graph: GraphProps,
}
function GraphG6(props: Props) {
const { graph: graphProps } = props;
const ref = useRef(null);
const graphRef = useRef<Graph>();
useEffect(
() => {
if (!graphRef.current) {
graphRef.current = new Graph({
container: ref.current!,
height: 800,
fitView: true,
modes: {
default: ['drag-canvas', 'zoom-canvas', 'drag-node'],
},
layout: {
type: 'dagre',
rankdir: 'BT',
},
// defaultNode: {
// anchorPoints: [[0.5, 0], [0, 0.5], [1, 0.5], [0.5, 1]],
// type: 'rect',
// style: {
// radius: 2,
// },
// labelCfg: {
// style: {
// // fontWeight: 700,
// fontFamily: 'Roboto',
// },
// },
// },
defaultNode: { type: 'nodeWithFlag' },
defaultEdge: {
style: {
endArrow: true,
},
},
// nodeStateStyles: {
// hover: {
// fill: 'lightsteelblue',
// },
// highlight: {
// lineWidth: 3,
// },
// lowlight: {
// opacity: 0.3,
// },
// },
edgeStateStyles: {
lowlight: {
opacity: 0.3,
},
},
animate: true,
animateCfg: {
duration: 500,
easing: 'easePolyInOut',
},
});
}
const graph = graphRef.current;
// Mouse enter a node
graph.on('node:mouseenter', (e) => {
const nodeItem = e.item!; // Get the target item
graph.setItemState(nodeItem, 'hover', true); // Set the state 'hover' of the item to be true
});
// Mouse leave a node
graph.on('node:mouseleave', (e) => {
const nodeItem = e.item!; // Get the target item
graph.setItemState(nodeItem, 'hover', false); // Set the state 'hover' of the item to be false
});
},
[],
);
useEffect(
() => {
const graph = graphRef.current!;
// Click a node
graph.on('node:click', (e) => {
const nodeItem = e.item!; // Get the clicked item
let onlyRemoveStates = false;
if (nodeItem.hasState('highlight')) {
onlyRemoveStates = true;
}
const clickNodes = graph.findAllByState('node', 'highlight');
clickNodes.forEach((cn) => {
graph.setItemState(cn, 'highlight', false);
});
const lowlightNodes = graph.findAllByState('node', 'lowlight');
lowlightNodes.forEach((cn) => {
graph.setItemState(cn, 'lowlight', false);
});
const lowlightEdges = graph.findAllByState('edge', 'lowlight');
lowlightEdges.forEach((cn) => {
graph.setItemState(cn, 'lowlight', false);
});
if (onlyRemoveStates) {
return;
}
graph.getNodes().forEach((node) => {
graph.setItemState(node, 'lowlight', true);
});
graph.getEdges().forEach((edge) => {
graph.setItemState(edge, 'lowlight', true);
});
const relevantNodeIds: string[] = [];
const relevantLoEdges: [string, string][] = [];
const relevantHiEdges: [string, string][] = [];
let newNodeIds: string[] = [nodeItem.getModel().id!];
let newLoEdges: [string, string][] = [];
let newHiEdges: [string, string][] = [];
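// worklist-style forward reachability from the clicked node: repeatedly collect
// the lo/hi edges leaving the nodes reached so far and the nodes they point to;
// everything reached is then taken out of the 'lowlight' state below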
while (newNodeIds.length > 0 || newLoEdges.length > 0 || newHiEdges.length > 0) {
relevantNodeIds.push(...newNodeIds);
relevantLoEdges.push(...newLoEdges);
relevantHiEdges.push(...newHiEdges);
newLoEdges = graphProps.lo_edges
.filter((edge) => relevantNodeIds.includes(edge[0].toString())
&& !relevantLoEdges.includes(edge));
newHiEdges = graphProps.hi_edges
.filter((edge) => relevantNodeIds.includes(edge[0].toString())
&& !relevantHiEdges.includes(edge));
newNodeIds = newLoEdges
.concat(newHiEdges)
.map((edge) => edge[1].toString())
.filter((id) => !relevantNodeIds.includes(id));
}
const relevantEdgeIds = relevantLoEdges
.map(([source, target]) => `LO_${source}_${target}`)
.concat(
relevantHiEdges
.map(([source, target]) => `HI_${source}_${target}`),
);
relevantNodeIds
.forEach((id) => {
graph.setItemState(id, 'lowlight', false);
graph.setItemState(id, 'highlight', true);
});
relevantEdgeIds
.forEach((id) => {
graph.setItemState(id, 'lowlight', false);
});
// graph.setItemState(nodeItem, 'lowlight', false);
// graph.setItemState(nodeItem, 'highlight', true);
// nodeItem.getEdges().forEach((edge) => {
// graph.setItemState(edge, 'lowlight', false);
// });
});
return () => { graph.off('node:click'); };
},
[graphProps],
);
useEffect(
() => {
const graph = graphRef.current!;
const { nodes, edges } = nodesAndEdgesFromGraphProps(graphProps);
graph.changeData({
nodes,
edges,
});
},
[graphProps],
);
return (
<>
<div ref={ref} style={{ overflow: 'hidden' }} />
<div style={{ padding: 4 }}>
<span style={{ color: '#ed6c02', marginRight: 8 }}>lo edge (condition is false)</span>
{' '}
<span style={{ color: '#1976d2', marginRight: 8 }}>hi edge (condition is true)</span>
{' '}
Click nodes to highlight paths! (You can also drag and zoom.)
<br />
The
{' '}
<span style={{ color: '#4caf50' }}>Root for: X</span>
{' '}
labels indicate where to start looking to determine the truth value of statement X.
</div>
</>
);
}
export default GraphG6;


@@ -0,0 +1,13 @@
import { createContext } from 'react';
interface ILoadingContext {
loading: boolean;
setLoading: (loading: boolean) => void;
}
const LoadingContext = createContext<ILoadingContext>({
loading: false,
setLoading: () => {},
});
export default LoadingContext;


@@ -0,0 +1,58 @@
import React from 'react';
import ReactMarkdown from 'markdown-to-jsx';
import {
Box,
Link,
Typography,
} from '@mui/material';
const options = {
overrides: {
h1: {
component: Typography,
props: {
gutterBottom: true,
variant: 'h4',
},
},
h2: {
component: Typography,
props: { gutterBottom: true, variant: 'h6' },
},
h3: {
component: Typography,
props: { gutterBottom: true, variant: 'subtitle1' },
},
h4: {
component: Typography,
props: {
gutterBottom: true,
variant: 'caption',
paragraph: true,
},
},
p: {
component: Typography,
props: { paragraph: true, sx: { '&:last-child': { marginBottom: 0 } } },
},
a: {
component: (props: any) => (
// eslint-disable-next-line react/jsx-props-no-spreading
<Link target="_blank" rel="noopener noreferrer" {...props} />
),
},
li: {
component: (props: any) => (
<Box component="li" sx={{ mt: 1 }}>
{/* eslint-disable-next-line react/jsx-props-no-spreading */}
<Typography component="span" {...props} />
</Box>
),
},
},
};
export default function Markdown(props: any) {
// eslint-disable-next-line react/jsx-props-no-spreading
return <ReactMarkdown options={options} {...props} />;
}


@@ -0,0 +1,17 @@
import { createContext } from 'react';
import { AlertColor } from '@mui/material';
type Status = { message: string, severity: AlertColor, potentialUserChange: boolean } | undefined;
interface ISnackbarContext {
status: Status;
setStatus: (status: Status) => void;
}
const SnackbarContext = createContext<ISnackbarContext>({
status: undefined,
setStatus: () => {},
});
export default SnackbarContext;

BIN
frontend/src/cpec-logo.png Normal file



@@ -0,0 +1,37 @@
ADF-BDD.dev allows you to solve Abstract Dialectical Frameworks (ADFs). The ADFs are represented as Binary Decision Diagrams (BDDs).
The Web UI mimics many options of the CLI version of the [underlying adf-bdd tool](https://github.com/ellmau/adf-obdd). The syntax for the ADF code is identical.
In the below form, you can either type/paste your `code` or upload a file in the same format.
To put it briefly, an ADF consists of statements and acceptance conditions for these statements.
For instance, the following code indicates that `a,b,c,d` are statements, that `a` is assumed to be true (verum), `b` is true if `b` is true (which is self-supporting), `c` is true if `a` and `b` are true, and `d` is true if `b` is false.
```
s(a).
s(b).
s(c).
s(d).
ac(a,c(v)).
ac(b,b).
ac(c,and(a,b)).
ac(d,neg(b)).
```
Internally, the ADF is represented as a BDD.
The `Parsing Strategy` determines the internal implementation used for this. `Naive` uses our tool's own BDD implementation. `Hybrid` combines our approach with the existing Rust BDD library [`biodivine`](https://crates.io/crates/biodivine-lib-bdd). Don't be concerned about this choice if you are new to this tool; just pick either one.
You will get a view of the BDD in the detail view after you have added the problem.
You can optionally set a name for your ADF problem; otherwise a random name will be chosen. At the moment the name cannot be changed later (but you can remove and re-add the problem).
We also support adding AFs in the ICCMA competition format. They are converted to ADFs internally in the obvious way.
For example, you can try the following code and change the option below from ADF to AF.
```
p af 5
# this is a comment
1 2
2 4
4 5
5 4
5 5
```
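For illustration, here is a sketch of what this AF amounts to as an ADF (the statement names and the exact encoding used internally may differ): every argument becomes a statement whose acceptance condition is the conjunction of the negations of its attackers, and arguments without attackers get verum.
```
s(1).
s(2).
s(3).
s(4).
s(5).
ac(1,c(v)).
ac(2,neg(1)).
ac(3,c(v)).
ac(4,and(neg(2),neg(5))).
ac(5,and(neg(4),neg(5))).
```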


@@ -0,0 +1,13 @@
First of all, you can review the code that you added. You can also delete the problem if you made a mistake or do not need it anymore.
Further below, you can have a look at the BDD representations of your problem using different semantics.
In principle, each statement gets its own BDD that indicates how its truth value can be obtained from the other ones. Note that every BDD has the `BOT` and `TOP` nodes, ultimately indicating the truth value (false or true, respectively).
All these individual BDDs are displayed in a merged representation where the `Root for:` labels tell you where to start looking if you want to
get the truth value of an individual statement.
For instance, consider a BDD that (besides `BOT` and `TOP`) only contains a node `b` annotated with `Root for: a` and the annotation `Root for: b` at the `TOP` node.
Since the root for `b` is the `TOP` node, we know that `b` must be true. To obtain the truth value for `a`, we start at the node `b`; since `b` is true, we follow the blue edge to obtain the value for `a` (we will end up in `BOT` or `TOP` there). If `b` were false, we would follow the orange edge instead. Note that it is not always possible to directly determine the truth values of statements (which is exactly why we need tools like this).
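As a minimal concrete instance of this example (the edge targets are made up for illustration; the blue edge is `hi`, the orange edge is `lo`):
```
TOP  (Root for: b)
BOT
b    (Root for: a)  hi -> TOP, lo -> BOT
```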
On the very left, you can view the initial representation of your problem after parsing. This also indicates the parsing strategy that you have chosen (`Naive` or `Hybrid`).
The other tabs allow you to solve the problem using different semantics and optimizations. Some of them (e.g. `complete`) may produce multiple models that you can cycle through.
To get a better idea of the differences, you can have a look at the [command line tool](https://github.com/ellmau/adf-obdd/tree/main/bin).


BIN
frontend/src/iccl-logo.png Normal file


frontend/src/index.html Normal file

@@ -0,0 +1,18 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8"/>
<meta name="viewport" content="initial-scale=1, width=device-width" />
<title>ADF-OBDD Web Visualizer</title>
<script type="module" src="app.tsx"></script>
</head>
<body>
<noscript>
<h1>ADF-BDD.DEV</h1>
<p>Turn on JavaScript in your browser to use our ADF tool!</p>
<a href="./legal.html" target="_blank">Legal Information (Impressum and Data Protection Regulation)</a>
</noscript>
<div id="app"></div>
</body>
</html>


frontend/src/legal.html Normal file

@@ -0,0 +1,212 @@
<!doctype html>
<html>
<head>
<title>ADF-BDD.dev - Legal Notice</title>
<meta
name="description"
content="Impressum and Data Protection Regulation for adf-bdd.dev"
/>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<style>
body {
font-family: Helvetica;
}
h1 {
text-align: center;
}
section {
max-width: 1000px;
margin: 0 auto 32px;
padding: 16px;
box-shadow: 0 0 10px 0px rgba(0, 0, 0, 0.4);
}
section > :first-child {
margin-top: 0;
}
section > :last-child {
margin-bottom: 0;
}
</style>
</head>
<body>
<header>
<h1>ADF-BDD.DEV Legal Notice</h1>
</header>
<section>
<h2>Impressum</h2>
The
<a
href="https://tu-dresden.de/impressum?set_language=en"
target="_blank"
rel="noreferrer noopener"
>Impressum of TU Dresden</a
>
applies with the following amendments:
<h3>Responsibilities - Content and Technical Implementation</h3>
<p>
Dipl.-Inf. Lukas Gerlach<br />
Technische Universität Dresden<br />
Fakultät Informatik<br />
Institut für Theoretische Informatik<br />
Professur für Wissensbasierte Systeme<br />
01062 Dresden<br />
GERMANY
</p>
<p>
Email: lukas.gerlach@tu-dresden.de<br />
Phone: (+49) 351 / 463 43503
</p>
</section>
<section>
<h2>Data Protection Regulation</h2>
<p>
We process your personal data only in the form of metadata that
is sent to us when you access the website. This is done to
pursue our legitimate interest of providing and improving this
publicly available website (https://adf-bdd.dev). To this end,
the metadata is also written to server log files. The data may
contain the following personal information: public IP
address, time of access, internet browser (e.g. user agent,
version), operating system, referrer url, hostname of requesting
machine. We only set cookies that are necessary for the
provision of our service, i.e. to check if a user is logged in.
</p>
<h3>
Data Processed for Website Provisioning and Log File Creation:
Log Files for Website Provisioning
</h3>
<p>
We use Cloudflare to resolve DNS requests for our website. To
ensure the security and performance of our website, we log
technical errors that may occur when accessing our website.
Additionally, information that your device's browser
automatically transmits to our server is collected. This
information includes:
</p>
<ul>
<li>IP address and operating system of your device,</li>
<li>Browser type, version, language,</li>
<li>
The website from which the access was made (referrer URL),
</li>
<li>The status code (e.g., 404), and</li>
<li>The transmission protocol used (e.g., http/2).</li>
</ul>
<p>
The processing of this data is based on our legitimate interest
according to Art. 6(1)(f) GDPR. Our legitimate interest lies in
troubleshooting, optimizing, and ensuring the performance of our
website, as well as guaranteeing the security of our network and
systems. We do not use the data to personally identify
individual users unless there is a legal reason to do so or
explicit consent is obtained from you.
</p>
<p>
Cloudflare acts as an intermediary between your browser and our
server. When a DNS record is set to "Proxied," Cloudflare
answers DNS queries with a Cloudflare Anycast IP address instead
of the actual IP address of our server. This directs HTTP/HTTPS
requests to the Cloudflare network, which offers advantages in
terms of security and performance. Cloudflare also hides the IP
address of our origin server, making it more difficult for
attackers to directly target it.
</p>
<p>
Cloudflare may store certain data related to DNS requests,
including IP addresses. However, Cloudflare anonymizes IP
addresses by truncating the last octets for IPv4 and the last 80
bits for IPv6. The truncated IP addresses are deleted within 25
hours. Cloudflare is committed to not selling or sharing users'
personal data with third parties and not using the data for
targeted advertising. For more information on data protection at
Cloudflare, please see the Cloudflare Privacy Policy:
<a href="https://www.cloudflare.com/de-de/privacypolicy/"
>https://www.cloudflare.com/de-de/privacypolicy/</a
>
</p>
<p>
To meet the requirements of the GDPR, we have entered into a
Data Processing Agreement (DPA) with Cloudflare, which ensures
that Cloudflare processes the data on our behalf and in
accordance with applicable data protection regulations. You have
the right to access, rectify, erase, restrict processing, and
data portability of your personal data. Please contact us if you
wish to exercise these rights.
</p>
<p>
Please note that our website is hosted on our own servers, and
Cloudflare merely serves as a DNS provider and proxy. We
implement appropriate technical and organizational measures to
ensure the protection of your data.
</p>
<h3>Legal basis</h3>
<p>
The legal basis for the data processing is
<a
href="https://gdpr.eu/article-6-how-to-process-personal-data-legally/"
target="_blank"
rel="noreferrer noopener"
>Art. 6 para. 1 lit. f GDPR</a
>.
</p>
<h3>Rights of data subjects</h3>
<ul>
<li>
You have the right to obtain information from TU Dresden
about the data stored about your person and/or to have
incorrectly stored data corrected.
</li>
<li>
You have the right to erasure or restriction of the
processing and/or a right to object to the processing.
</li>
<li>
You can contact TU Dresden's Data Protection Officer at any
time.
<p>
Tel.: +49 351 / 463 32839<br />
Fax: +49 351 / 463 39718<br />
Email: informationssicherheit@tu-dresden.de<br />
<a
href="https://tu-dresden.de/informationssicherheit"
target="_blank"
rel="noreferrer noopener"
>https://tu-dresden.de/informationssicherheit</a
>
</p>
</li>
<li>
You also have the right to complain to a supervisory
authority if you are concerned that the processing of your
personal data is an infringement of the law. The competent
supervisory authority for data protection is:
<p>
Saxon Data Protection Commissioner<br />
Dr. Juliane Hundert<br />
Maternistraße 17<br />
01067 Dresden<br />
Email: post@sdtb.sachsen.de<br />
Phone: +49 351 / 85471 101<br />
<a
href="http://www.datenschutz.sachsen.de"
target="_blank"
rel="noreferrer noopener"
>www.datenschutz.sachsen.de</a
>
</p>
</li>
</ul>
</section>
</body>
</html>

BIN
frontend/src/scads-logo.png Normal file


BIN
frontend/src/secai-logo.png Normal file


frontend/src/test-data.ts Normal file

@@ -0,0 +1,19 @@
const testData = [
{ label: 'BOT', lo: 0, hi: 0 },
{ label: 'TOP', lo: 1, hi: 1 },
{ label: 'Var8', lo: 0, hi: 1 },
{ label: 'Var7', lo: 1, hi: 0 },
{ label: 'Var0', lo: 3, hi: 1 },
{ label: 'Var9', lo: 0, hi: 1 },
{ label: 'Var8', lo: 5, hi: 0 },
{ label: 'Var0', lo: 6, hi: 5 },
{ label: 'Var1', lo: 0, hi: 1 },
{ label: 'Var0', lo: 1, hi: 0 },
{ label: 'Var9', lo: 1, hi: 0 },
{ label: 'Var8', lo: 0, hi: 10 },
{ label: 'Var0', lo: 5, hi: 0 },
{ label: 'Var8', lo: 1, hi: 0 },
{ label: 'Var5', lo: 13, hi: 0 },
];
export default testData;

BIN
frontend/src/tud-logo.png Normal file


frontend/tsconfig.json Normal file

@@ -0,0 +1,103 @@
{
"compilerOptions": {
/* Visit https://aka.ms/tsconfig to read more about this file */
/* Projects */
// "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */
// "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */
// "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */
// "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */
// "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */
// "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */
/* Language and Environment */
"target": "es2016", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */
// "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */
"jsx": "preserve", /* Specify what JSX code is generated. */
// "experimentalDecorators": true, /* Enable experimental support for TC39 stage 2 draft decorators. */
// "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. */
// "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */
// "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */
// "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */
// "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */
// "noLib": true, /* Disable including any library files, including the default lib.d.ts. */
// "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */
// "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */
/* Modules */
"module": "esnext", /* Specify what module code is generated. */
// "rootDir": "./", /* Specify the root folder within your source files. */
"moduleResolution": "node", /* Specify how TypeScript looks up a file from a given module specifier. */
// "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */
// "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */
// "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */
// "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */
// "types": [], /* Specify type package names to be included without being referenced in a source file. */
// "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */
// "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */
// "resolveJsonModule": true, /* Enable importing .json files. */
// "noResolve": true, /* Disallow 'import's, 'require's or '<reference>'s from expanding the number of files TypeScript should add to a project. */
/* JavaScript Support */
// "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */
// "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */
// "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */
/* Emit */
// "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */
// "declarationMap": true, /* Create sourcemaps for d.ts files. */
// "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */
// "sourceMap": true, /* Create source map files for emitted JavaScript files. */
// "outFile": "./", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */
// "outDir": "./", /* Specify an output folder for all emitted files. */
// "removeComments": true, /* Disable emitting comments. */
// "noEmit": true, /* Disable emitting files from a compilation. */
// "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */
// "importsNotUsedAsValues": "remove", /* Specify emit/checking behavior for imports that are only used for types. */
// "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */
// "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */
// "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */
// "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */
// "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */
// "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */
// "newLine": "crlf", /* Set the newline character for emitting files. */
// "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */
// "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */
// "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */
// "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */
// "declarationDir": "./", /* Specify the output directory for generated declaration files. */
// "preserveValueImports": true, /* Preserve unused imported values in the JavaScript output that would otherwise be removed. */
/* Interop Constraints */
// "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */
// "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */
"esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */
// "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */
"forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */
/* Type Checking */
"strict": true, /* Enable all strict type-checking options. */
// "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */
// "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */
// "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */
// "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */
// "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */
// "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */
// "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */
// "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */
// "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */
// "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */
// "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */
// "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. */
// "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */
// "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */
// "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */
// "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */
// "allowUnusedLabels": true, /* Disable error reporting for unused labels. */
// "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */
/* Completeness */
// "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */
"skipLibCheck": true /* Skip type checking all .d.ts files. */
}
}

frontend/yarn.lock Normal file

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
[package]
name = "adf_bdd"
version = "0.2.4"
version = "0.3.1"
authors = ["Stefan Ellmauthaler <stefan.ellmauthaler@tu-dresden.de>"]
edition = "2021"
homepage = "https://ellmau.github.io/adf-obdd/"
@@ -24,22 +24,29 @@ crate-type = ["lib"] # The crate types to generate.
[dependencies]
log = { version = "0.4"}
nom = "7.1.1"
nom = "7.1.3"
lexical-sort = "0.3.1"
serde = { version = "1.0", features = ["derive","rc"] }
serde_json = "1.0"
biodivine-lib-bdd = "0.3.0"
biodivine-lib-bdd = "0.5.0"
derivative = "2.2.0"
roaring = "0.10.1"
strum = { version = "0.24", features = ["derive"] }
crossbeam-channel = "0.5"
rand = {version = "0.8.5", features = ["std_rng"]}
[dev-dependencies]
test-log = "0.2"
env_logger = "0.9"
env_logger = "0.10"
quickcheck = "1"
quickcheck_macros = "1"
[features]
default = ["adhoccounting", "variablelist" ]
adhoccounting = [] # count models ad-hoc - disable if counting is not needed
default = ["adhoccounting", "variablelist", "frontend" ]
adhoccounting = [] # count paths ad-hoc - disable if counting is not needed
importexport = []
variablelist = [ "HashSet" ]
HashSet = []
adhoccountmodels = [ "adhoccounting" ] # count models as well as paths ad-hoc; note that facet methods will need this feature too
benchmark = ["adhoccounting", "variablelist"] # set of features for speed benchmarks
frontend = []


@@ -1,4 +1,12 @@
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/ellmau/adf-obdd/Code%20coverage%20with%20tarpaulin) [![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd) ![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases) ![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd) [![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases) [![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
[![Crates.io](https://img.shields.io/crates/v/adf_bdd)](https://crates.io/crates/adf_bdd)
[![docs.rs](https://img.shields.io/docsrs/adf_bdd?label=docs.rs)](https://docs.rs/adf_bdd/latest/adf_bdd/)
![GitHub Workflow Status](https://img.shields.io/github/actions/workflow/status/ellmau/adf-obdd/codecov.yml?branch=main)
[![Coveralls](https://img.shields.io/coveralls/github/ellmau/adf-obdd)](https://coveralls.io/github/ellmau/adf-obdd)
![GitHub release (latest by date including pre-releases)](https://img.shields.io/github/v/release/ellmau/adf-obdd?include_prereleases)
![GitHub (Pre-)Release Date](https://img.shields.io/github/release-date-pre/ellmau/adf-obdd?label=release%20from) ![GitHub top language](https://img.shields.io/github/languages/top/ellmau/adf-obdd)
[![GitHub all releases](https://img.shields.io/github/downloads/ellmau/adf-obdd/total)](https://github.com/ellmau/adf-obdd/releases)
![Crates.io](https://img.shields.io/crates/l/adf_bdd)
[![GitHub Discussions](https://img.shields.io/github/discussions/ellmau/adf-obdd)](https://github.com/ellmau/adf-obdd/discussions) ![rust-edition](https://img.shields.io/badge/Rust--edition-2021-blue?logo=rust)
# Abstract Dialectical Frameworks solved by Binary Decision Diagrams; developed in Dresden (ADF-BDD)
This library contains an efficient representation of Abstract Dialectical Frameworks (ADF) by utilising an implementation of Ordered Binary Decision Diagrams (OBDD).
@@ -113,3 +121,48 @@ for model in adf.complete() {
print!("{}", printer.print_interpretation(&model));
}
```
Use the new `NoGood`-based algorithm and utilise the new interface with channels:
```rust
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;
use adf_bdd::adf::heuristics::Heuristic;
use adf_bdd::datatypes::{Term, adf::VarContainer};
// create a channel
let (s, r) = crossbeam_channel::unbounded();
let variables = VarContainer::default();
let variables_worker = variables.clone();
// spawn a solver thread
let solving = std::thread::spawn(move || {
// use the above example as input
let input = "s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,or(a,b)).ac(c,neg(b)).ac(d,d).";
let parser = AdfParser::with_var_container(variables_worker);
parser.parse()(&input).expect("parsing worked well");
// use hybrid approach
let mut adf = adf_bdd::adfbiodivine::Adf::from_parser(&parser).hybrid_step();
// compute stable with the simple heuristic
adf.stable_nogood_channel(Heuristic::Simple, s);
});
let printer = variables.print_dictionary();
// print results as they are computed
while let Ok(result) = r.recv() {
print!("stable model: {:?} \n", result);
// use dictionary
print!("stable model with variable names: {}", printer.print_interpretation(&result));
# assert_eq!(result, vec![Term(1),Term(1),Term(0),Term(0)]);
}
// waiting for the other thread to close
solving.join().unwrap();
```
# Acknowledgements
This work is partly supported by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in project number 389792660 (TRR 248, [Center for Perspicuous Computing](https://www.perspicuous-computing.science/)),
the Bundesministerium für Bildung und Forschung (BMBF, Federal Ministry of Education and Research) in the
[Center for Scalable Data Analytics and Artificial Intelligence](https://www.scads.de) (ScaDS.AI),
and by the [Center for Advancing Electronics Dresden](https://cfaed.tu-dresden.de) (cfaed).
# Affiliation
This work has been partly developed by the [Knowledge-Based Systems Group](http://kbs.inf.tu-dresden.de/), [Faculty of Computer Science](https://tu-dresden.de/ing/informatik) of [TU Dresden](https://tu-dresden.de).
# Disclaimer
Hosting content here does not establish any formal or legal relation to TU Dresden.


@@ -15,7 +15,7 @@ fn main() {
fn gen_tests() {
let out_dir = env::var("OUT_DIR").unwrap();
let destination = Path::new(&out_dir).join("tests.rs");
let mut test_file = File::create(&destination).unwrap();
let mut test_file = File::create(destination).unwrap();
if let Ok(test_data_directory) = read_dir("../res/adf-instances/instances/") {
// write test file header, put `use`, `const` etc there


@@ -5,7 +5,8 @@ This module describes the abstract dialectical framework.
- computing fixpoints
*/
use serde::{Deserialize, Serialize};
pub mod heuristics;
use std::cell::RefCell;
use crate::{
datatypes::{
@@ -15,18 +16,28 @@ use crate::{
},
FacetCounts, ModelCounts, Term, Var,
},
nogoods::{NoGood, NoGoodStore},
obdd::Bdd,
parser::{AdfParser, Formula},
};
use rand::{rngs::StdRng, SeedableRng};
use serde::{Deserialize, Serialize};
use self::heuristics::Heuristic;
#[derive(Serialize, Deserialize, Debug)]
/// Representation of an ADF, with an ordering and dictionary which relates statements to numbers, a binary decision diagram, and a list of acceptance conditions in [`Term`][crate::datatypes::Term] representation.
///
/// Please note that due to the nature of the underlying reduced and ordered Bdd the concept of a [`Term`][crate::datatypes::Term] represents one (sub) formula as well as truth-values.
pub struct Adf {
ordering: VarContainer,
bdd: Bdd,
ac: Vec<Term>,
/// The ordering or the variables in the ADF including a dictionary for the statements
pub ordering: VarContainer,
/// The underlying binary decision diagram that represents the ADF
pub bdd: Bdd,
/// Acceptance Conditions for the ADF
pub ac: Vec<Term>,
#[serde(skip, default = "Adf::default_rng")]
rng: RefCell<StdRng>,
}
impl Default for Adf {
@ -35,6 +46,18 @@ impl Default for Adf {
ordering: VarContainer::default(),
bdd: Bdd::new(),
ac: Vec::new(),
rng: Adf::default_rng(),
}
}
}
impl From<(VarContainer, Bdd, Vec<Term>)> for Adf {
fn from(source: (VarContainer, Bdd, Vec<Term>)) -> Self {
Self {
ordering: source.0,
bdd: source.1,
ac: source.2,
rng: Self::default_rng(),
}
}
}
@ -44,16 +67,12 @@ impl Adf {
pub fn from_parser(parser: &AdfParser) -> Self {
log::info!("[Start] instantiating BDD");
let mut result = Self {
ordering: VarContainer::from_parser(
parser.namelist_rc_refcell(),
parser.dict_rc_refcell(),
),
ordering: parser.var_container(),
bdd: Bdd::new(),
ac: vec![Term(0); parser.namelist_rc_refcell().as_ref().borrow().len()],
ac: vec![Term(0); parser.dict_size()],
rng: Adf::default_rng(),
};
(0..parser.namelist_rc_refcell().borrow().len())
.into_iter()
.for_each(|value| {
(0..parser.dict_size()).for_each(|value| {
log::trace!("adding variable {}", Var(value));
result.bdd.variable(Var(value));
});
@ -84,9 +103,10 @@ impl Adf {
bio_ac: &[biodivine_lib_bdd::Bdd],
) -> Self {
let mut result = Self {
ordering: VarContainer::copy(ordering),
ordering: ordering.clone(),
bdd: Bdd::new(),
ac: vec![Term(0); bio_ac.len()],
rng: Adf::default_rng(),
};
result
.ac
@ -131,9 +151,20 @@ impl Adf {
}
}
});
log::trace!("ordering: {:?}", result.ordering);
log::trace!("adf {:?} instantiated with bdd {}", result.ac, result.bdd);
result
}
fn default_rng() -> RefCell<StdRng> {
RefCell::new(StdRng::from_entropy())
}
/// Sets a cryptographically strong seed.
pub fn seed(&mut self, seed: [u8; 32]) {
self.rng = RefCell::new(StdRng::from_seed(seed))
}
/// Instantiates a new ADF, based on a [biodivine adf][crate::adfbiodivine::Adf].
pub fn from_biodivine(bio_adf: &super::adfbiodivine::Adf) -> Self {
Self::from_biodivine_vector(bio_adf.var_container(), bio_adf.ac())
@ -405,6 +436,10 @@ impl Adf {
true
}
fn is_two_valued(&self, interpretation: &[Term]) -> bool {
interpretation.iter().all(|t| t.is_truth_value())
}
fn two_val_model_counts<H>(&mut self, interpr: &[Term], heuristic: H) -> Vec<Vec<Term>>
where
H: Fn(&Self, (Var, Term), (Var, Term), &[Term]) -> std::cmp::Ordering + Copy,
@ -590,6 +625,26 @@ impl Adf {
}
}
/// Constructs the fixpoint of the given interpretation with respect to the ADF.
/// Sets _update_ to [`true`] if the value has been updated and to [`false`] otherwise.
fn update_interpretation_fixpoint_upd(
&mut self,
interpretation: &[Term],
update: &mut bool,
) -> Vec<Term> {
let mut cur_int = interpretation.to_vec();
*update = false;
loop {
let new_int = self.update_interpretation(&cur_int); // iterate on the current interpretation, not the initial one, so that a fixpoint is actually reached
if cur_int == new_int {
return cur_int;
} else {
cur_int = new_int;
*update = true;
}
}
}
fn update_interpretation(&mut self, interpretation: &[Term]) -> Vec<Term> {
self.apply_interpretation(interpretation, interpretation)
}
@ -681,7 +736,7 @@ impl Adf {
interpretation
.iter()
.map(|t| {
let mcs = self.bdd.models(*t, true);
let mcs = self.bdd.models(*t, false);
let n_vdps = { |t| self.bdd.var_dependencies(t).len() };
@ -697,11 +752,193 @@ impl Adf {
})
.collect::<Vec<_>>()
}
/// Computes the stable extensions of a given [`Adf`], using the [`NoGood`]-learner.
pub fn stable_nogood<'a, 'c>(
&'a mut self,
heuristic: Heuristic,
) -> impl Iterator<Item = Vec<Term>> + 'c
where
'a: 'c,
{
let grounded = self.grounded();
let heu = heuristic.get_heuristic();
let (s, r) = crossbeam_channel::unbounded::<Vec<Term>>();
self.stable_nogood_get_vec(&grounded, heu, s, r).into_iter()
}
/// Computes the stable extensions of a given [`Adf`], using the [`NoGood`]-learner.
/// Needs a [`Sender`][crossbeam_channel::Sender<Vec<crate::datatypes::Term>>] to which the results of the computation are sent.
pub fn stable_nogood_channel(
&mut self,
heuristic: Heuristic,
sender: crossbeam_channel::Sender<Vec<Term>>,
) {
let grounded = self.grounded();
self.nogood_internal(
&grounded,
heuristic.get_heuristic(),
Self::stability_check,
sender,
);
}
/// Computes the two-valued extensions of a given [`Adf`], using the [`NoGood`]-learner.
/// Needs a [`Sender`][crossbeam_channel::Sender<Vec<crate::datatypes::Term>>] to which the results of the computation are sent.
pub fn two_val_nogood_channel(
&mut self,
heuristic: Heuristic,
sender: crossbeam_channel::Sender<Vec<Term>>,
) {
let grounded = self.grounded();
self.nogood_internal(
&grounded,
heuristic.get_heuristic(),
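// accept every two-valued interpretation: for the two-valued semantics the stability check is trivially true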
|_self: &mut Self, _int: &[Term]| true,
sender,
)
}
fn stable_nogood_get_vec<H>(
&mut self,
interpretation: &[Term],
heuristic: H,
s: crossbeam_channel::Sender<Vec<Term>>,
r: crossbeam_channel::Receiver<Vec<Term>>,
) -> Vec<Vec<Term>>
where
H: Fn(&Self, &[Term]) -> Option<(Var, Term)>,
{
self.nogood_internal(interpretation, heuristic, Self::stability_check, s);
r.iter().collect()
}
fn nogood_internal<H, I>(
&mut self,
interpretation: &[Term],
heuristic: H,
stability_check: I,
s: crossbeam_channel::Sender<Vec<Term>>,
) where
H: Fn(&Self, &[Term]) -> Option<(Var, Term)>,
I: Fn(&mut Self, &[Term]) -> bool,
{
let mut cur_interpr = interpretation.to_vec();
let mut ng_store = NoGoodStore::new(
self.ac
.len()
.try_into()
.expect("Expecting only u32 many statements"),
);
let mut stack: Vec<(bool, NoGood)> = Vec::new();
let mut interpr_history: Vec<Vec<Term>> = Vec::new();
let mut backtrack = false;
let mut update_ng;
let mut update_fp = false;
let mut choice = false;
log::debug!("start learning loop");
loop {
log::trace!("interpr: {:?}", cur_interpr);
log::trace!("choice: {}", choice);
if choice {
choice = false;
if let Some((var, term)) = heuristic(&*self, &cur_interpr) {
log::trace!("choose {}->{}", var, term.is_true());
interpr_history.push(cur_interpr.to_vec());
cur_interpr[var.value()] = term;
stack.push((true, cur_interpr.as_slice().into()));
} else {
backtrack = true;
}
}
update_ng = true;
log::trace!("backtrack: {}", backtrack);
if backtrack {
backtrack = false;
if stack.is_empty() {
break;
}
while let Some((choice, ng)) = stack.pop() {
log::trace!("adding ng: {:?}", ng);
ng_store.add_ng(ng);
if choice {
cur_interpr = interpr_history.pop().expect("both stacks (interpr_history and `stack`) should always be synchronous");
log::trace!(
"choice found, reverting interpretation to {:?}",
cur_interpr
);
break;
}
}
}
match ng_store.conclusion_closure(&cur_interpr) {
crate::nogoods::ClosureResult::Update(new_int) => {
cur_interpr = new_int;
log::trace!("ng update: {:?}", cur_interpr);
stack.push((false, cur_interpr.as_slice().into()));
}
crate::nogoods::ClosureResult::NoUpdate => {
log::trace!("no update");
update_ng = false;
}
crate::nogoods::ClosureResult::Inconsistent => {
log::trace!("inconsistency");
backtrack = true;
continue;
}
}
let ac_consistent_interpr = self.apply_interpretation(&self.ac.clone(), &cur_interpr);
log::trace!(
"checking consistency of {:?} against {:?}",
ac_consistent_interpr,
cur_interpr
);
if cur_interpr
.iter()
.zip(ac_consistent_interpr.iter())
.any(|(cur, ac)| {
cur.is_truth_value() && ac.is_truth_value() && cur.is_true() != ac.is_true()
})
{
log::trace!("ac_inconsistency");
backtrack = true;
continue;
}
cur_interpr = self.update_interpretation_fixpoint_upd(&cur_interpr, &mut update_fp);
if update_fp {
log::trace!("fixpount updated");
//stack.push((false, cur_interpr.as_slice().into()));
} else if !update_ng {
// No updates done this loop
if !self.is_two_valued(&cur_interpr) {
choice = true;
} else if stability_check(self, &cur_interpr) {
// stable model found
stack.push((false, cur_interpr.as_slice().into()));
s.send(cur_interpr.clone())
.expect("Sender should accept results");
backtrack = true;
} else {
// not stable
log::trace!("2 val not stable");
stack.push((false, cur_interpr.as_slice().into()));
backtrack = true;
}
}
}
log::info!("{ng_store}");
log::debug!("{:?}", ng_store);
}
}
#[cfg(test)]
mod test {
use super::*;
use crossbeam_channel::unbounded;
use test_log::test;
#[test]
@ -712,11 +949,16 @@ mod test {
parser.parse()(input).unwrap();
let adf = Adf::from_parser(&parser);
assert_eq!(adf.ordering.names().as_ref().borrow()[0], "a");
assert_eq!(adf.ordering.names().as_ref().borrow()[1], "c");
assert_eq!(adf.ordering.names().as_ref().borrow()[2], "b");
assert_eq!(adf.ordering.names().as_ref().borrow()[3], "e");
assert_eq!(adf.ordering.names().as_ref().borrow()[4], "d");
assert_eq!(adf.ordering.name(Var(0)), Some("a".to_string()));
assert_eq!(adf.ordering.names().read().unwrap()[0], "a");
assert_eq!(adf.ordering.name(Var(1)), Some("c".to_string()));
assert_eq!(adf.ordering.names().read().unwrap()[1], "c");
assert_eq!(adf.ordering.name(Var(2)), Some("b".to_string()));
assert_eq!(adf.ordering.names().read().unwrap()[2], "b");
assert_eq!(adf.ordering.name(Var(3)), Some("e".to_string()));
assert_eq!(adf.ordering.names().read().unwrap()[3], "e");
assert_eq!(adf.ordering.name(Var(4)), Some("d".to_string()));
assert_eq!(adf.ordering.names().read().unwrap()[4], "d");
assert_eq!(adf.ac, vec![Term(4), Term(2), Term(7), Term(15), Term(12)]);
@ -727,11 +969,11 @@ mod test {
parser.varsort_alphanum();
let adf = Adf::from_parser(&parser);
assert_eq!(adf.ordering.names().as_ref().borrow()[0], "a");
assert_eq!(adf.ordering.names().as_ref().borrow()[1], "b");
assert_eq!(adf.ordering.names().as_ref().borrow()[2], "c");
assert_eq!(adf.ordering.names().as_ref().borrow()[3], "d");
assert_eq!(adf.ordering.names().as_ref().borrow()[4], "e");
assert_eq!(adf.ordering.names().read().unwrap()[0], "a");
assert_eq!(adf.ordering.names().read().unwrap()[1], "b");
assert_eq!(adf.ordering.names().read().unwrap()[2], "c");
assert_eq!(adf.ordering.names().read().unwrap()[3], "d");
assert_eq!(adf.ordering.names().read().unwrap()[4], "e");
assert_eq!(adf.ac, vec![Term(3), Term(7), Term(2), Term(11), Term(13)]);
}
@ -882,6 +1124,159 @@ mod test {
assert_eq!(adf.stable_count_optimisation_heu_b().next(), None);
}
#[test]
fn stable_nogood() {
let parser = AdfParser::default();
parser.parse()("s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,b).ac(c,and(a,b)).ac(d,neg(b)).\ns(e).ac(e,and(b,or(neg(b),c(f)))).s(f).\n\nac(f,xor(a,e)).")
.unwrap();
let mut adf = Adf::from_parser(&parser);
let grounded = adf.grounded();
let (s, r) = unbounded();
adf.nogood_internal(&grounded, heuristics::heu_simple, Adf::stability_check, s);
assert_eq!(
r.iter().collect::<Vec<_>>(),
vec![vec![
Term::TOP,
Term::BOT,
Term::BOT,
Term::TOP,
Term::BOT,
Term::TOP
]]
);
let mut stable_iter = adf.stable_nogood(Heuristic::Simple);
assert_eq!(
stable_iter.next(),
Some(vec![
Term::TOP,
Term::BOT,
Term::BOT,
Term::TOP,
Term::BOT,
Term::TOP
])
);
assert_eq!(stable_iter.next(), None);
let parser = AdfParser::default();
parser.parse()("s(a).s(b).ac(a,neg(b)).ac(b,neg(a)).").unwrap();
let mut adf = Adf::from_parser(&parser);
let grounded = adf.grounded();
let (s, r) = unbounded();
adf.nogood_internal(
&grounded,
heuristics::heu_simple,
Adf::stability_check,
s.clone(),
);
let stable_result = r.try_iter().collect::<Vec<_>>();
assert_eq!(
stable_result,
vec![vec![Term(1), Term(0)], vec![Term(0), Term(1)]]
);
let stable = adf.stable_nogood(Heuristic::Simple);
assert_eq!(
stable.collect::<Vec<_>>(),
vec![vec![Term(1), Term(0)], vec![Term(0), Term(1)]]
);
let stable = adf.stable_nogood(Heuristic::Custom(&|_adf, interpr| {
for (idx, term) in interpr.iter().enumerate() {
if !term.is_truth_value() {
return Some((Var(idx), Term::BOT));
}
}
None
}));
assert_eq!(
stable.collect::<Vec<_>>(),
vec![vec![Term(0), Term(1)], vec![Term(1), Term(0)]]
);
adf.stable_nogood_channel(Heuristic::default(), s);
assert_eq!(
r.iter().collect::<Vec<_>>(),
vec![vec![Term(1), Term(0)], vec![Term(0), Term(1)]]
);
// multi-threaded usage
let (s, r) = unbounded();
let solving = std::thread::spawn(move || {
let parser = AdfParser::default();
parser.parse()("s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,b).ac(c,and(a,b)).ac(d,neg(b)).\ns(e).ac(e,and(b,or(neg(b),c(f)))).s(f).\n\nac(f,xor(a,e)).")
.unwrap();
let mut adf = Adf::from_parser(&parser);
adf.stable_nogood_channel(Heuristic::MinModMaxVarImpMinPaths, s.clone());
adf.stable_nogood_channel(Heuristic::MinModMinPathsMaxVarImp, s.clone());
adf.two_val_nogood_channel(Heuristic::Simple, s)
});
let mut result_vec = Vec::new();
while let Ok(result) = r.recv() {
result_vec.push(result);
}
assert_eq!(
result_vec,
vec![
vec![
Term::TOP,
Term::BOT,
Term::BOT,
Term::TOP,
Term::BOT,
Term::TOP
],
vec![
Term::TOP,
Term::BOT,
Term::BOT,
Term::TOP,
Term::BOT,
Term::TOP
],
vec![
Term::TOP,
Term::TOP,
Term::TOP,
Term::BOT,
Term::BOT,
Term::TOP
],
vec![
Term::TOP,
Term::BOT,
Term::BOT,
Term::TOP,
Term::BOT,
Term::TOP
],
]
);
solving.join().unwrap();
}
#[test]
fn rand_stable_heu() {
let parser = AdfParser::default();
parser.parse()("s(a).s(b).ac(a,neg(b)).ac(b,neg(a)).").unwrap();
let mut adf = Adf::from_parser(&parser);
let result = adf.stable_nogood(Heuristic::Rand).collect::<Vec<_>>();
assert!(result.contains(&vec![Term(0), Term(1)]));
assert!(result.contains(&vec![Term(1), Term(0)]));
assert_eq!(result.len(), 2);
let mut adf = Adf::from_parser(&parser);
adf.seed([
122, 186, 240, 42, 235, 102, 89, 81, 187, 203, 127, 188, 167, 198, 126, 156, 25, 205,
204, 132, 112, 93, 23, 193, 21, 108, 166, 231, 158, 250, 128, 135,
]);
let result = adf.stable_nogood(Heuristic::Rand).collect::<Vec<_>>();
assert_eq!(result, vec![vec![Term(1), Term(0)], vec![Term(0), Term(1)]]);
}
#[test]
fn complete() {
let parser = AdfParser::default();
@ -924,6 +1319,7 @@ mod test {
}
}
#[cfg(feature = "adhoccountmodels")]
#[test]
fn formulacounts() {
let parser = AdfParser::default();

lib/src/adf/heuristics.rs (new file)

@ -0,0 +1,162 @@
/*!
This module contains all the crate-wide defined heuristic functions.
In addition, there is the public enum [Heuristic], which allows setting a heuristic function via the public API.
*/
use super::Adf;
use crate::datatypes::{Term, Var};
use rand::{Rng, RngCore};
use strum::{EnumString, EnumVariantNames};
/// Return value for heuristics.
pub type RetVal = Option<(Var, Term)>;
/// Signature for heuristics functions.
pub type HeuristicFn = dyn Fn(&Adf, &[Term]) -> RetVal + Sync;
pub(crate) fn heu_simple(_adf: &Adf, interpr: &[Term]) -> Option<(Var, Term)> {
for (idx, term) in interpr.iter().enumerate() {
if !term.is_truth_value() {
return Some((Var(idx), Term::TOP));
}
}
None
}
pub(crate) fn heu_mc_minpaths_maxvarimp(adf: &Adf, interpr: &[Term]) -> Option<(Var, Term)> {
interpr
.iter()
.enumerate()
.filter(|(_var, term)| !term.is_truth_value())
.min_by(|(vara, &terma), (varb, &termb)| {
match adf
.bdd
.paths(terma, true)
.minimum()
.cmp(&adf.bdd.paths(termb, true).minimum())
{
std::cmp::Ordering::Equal => adf
.bdd
.passive_var_impact(Var::from(*vara), interpr)
.cmp(&adf.bdd.passive_var_impact(Var::from(*varb), interpr)),
value => value,
}
})
.map(|(var, term)| {
(
Var::from(var),
adf.bdd.paths(*term, true).more_models().into(),
)
})
}
pub(crate) fn heu_mc_maxvarimp_minpaths(adf: &Adf, interpr: &[Term]) -> Option<(Var, Term)> {
interpr
.iter()
.enumerate()
.filter(|(_var, term)| !term.is_truth_value())
.min_by(|(vara, &terma), (varb, &termb)| {
match adf
.bdd
.passive_var_impact(Var::from(*vara), interpr)
.cmp(&adf.bdd.passive_var_impact(Var::from(*varb), interpr))
{
std::cmp::Ordering::Equal => adf
.bdd
.paths(terma, true)
.minimum()
.cmp(&adf.bdd.paths(termb, true).minimum()),
value => value,
}
})
.map(|(var, term)| {
(
Var::from(var),
adf.bdd.paths(*term, true).more_models().into(),
)
})
}
pub(crate) fn heu_rand(adf: &Adf, interpr: &[Term]) -> Option<(Var, Term)> {
let possible = interpr
.iter()
.enumerate()
.filter(|(_var, term)| !term.is_truth_value())
.collect::<Vec<_>>();
if possible.is_empty() {
return None;
}
let mut rng = adf.rng.borrow_mut();
if let Ok(position) = usize::try_from(rng.next_u64() % (possible.len() as u64)) {
// map the random index back to the corresponding undecided variable
Some((Var::from(possible[position].0), rng.gen_bool(0.5).into()))
} else {
None
}
}
/// Enumeration of all currently implemented heuristics.
/// It represents a public view on the crate-internal implementations of the heuristics.
#[derive(EnumString, EnumVariantNames, Copy, Clone)]
pub enum Heuristic<'a> {
/// Implementation of a simple heuristic.
/// This will just take the first undecided variable and map its value to [`true`][Term::TOP].
Simple,
/// Implementation of a heuristic which uses the minimal number of [paths][crate::obdd::Bdd::paths] and the maximal [variable-impact][crate::obdd::Bdd::passive_var_impact] to identify the variable to be set.
/// The truth value which is reached by the larger number of model-paths is chosen as the value.
MinModMinPathsMaxVarImp,
/// Implementation of a heuristic which uses the maximal [variable-impact][crate::obdd::Bdd::passive_var_impact] and the minimal number of [paths][crate::obdd::Bdd::paths] to identify the variable to be set.
/// The truth value which is reached by the larger number of model-paths is chosen as the value.
MinModMaxVarImpMinPaths,
/// Implementation of a heuristic, which chooses random values.
Rand,
/// Allow passing in an externally-defined custom heuristic.
#[strum(disabled)]
Custom(&'a HeuristicFn),
}
impl Default for Heuristic<'_> {
fn default() -> Self {
Self::Simple
}
}
impl std::fmt::Debug for Heuristic<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::Simple => write!(f, "Simple"),
Self::MinModMinPathsMaxVarImp => write!(f, "Maximal model-path count as value and minimum paths with maximal variable impact as variable choice"),
Self::MinModMaxVarImpMinPaths => write!(f, "Maximal model-path count as value and maximal variable impact with minimum paths as variable choice"),
Self::Rand => write!(f, "Random heuristics"),
Self::Custom(_) => f.debug_tuple("Custom function").finish(),
}
}
}
impl Heuristic<'_> {
pub(crate) fn get_heuristic(&self) -> &(dyn Fn(&Adf, &[Term]) -> RetVal + '_) {
match self {
Heuristic::Simple => &heu_simple,
Heuristic::MinModMinPathsMaxVarImp => &heu_mc_minpaths_maxvarimp,
Heuristic::MinModMaxVarImpMinPaths => &heu_mc_maxvarimp_minpaths,
Heuristic::Rand => &heu_rand,
Self::Custom(f) => f,
}
}
}
#[cfg(test)]
mod test {
use super::*;
use crate::datatypes::Term;
use crate::datatypes::Var;
#[test]
fn debug_out() {
dbg!(Heuristic::Simple);
dbg!(Heuristic::MinModMaxVarImpMinPaths);
dbg!(Heuristic::MinModMinPathsMaxVarImp);
dbg!(Heuristic::Rand);
dbg!(Heuristic::Custom(&|_adf: &Adf,
_int: &[Term]|
-> Option<(Var, Term)> { None }));
}
}


@ -5,6 +5,7 @@
//! - grounded
//! - stable
//! - complete
//!
//! semantics of ADFs.
use crate::{
@ -40,19 +41,17 @@ impl Adf {
pub fn from_parser(parser: &AdfParser) -> Self {
log::info!("[Start] instantiating BDD");
let mut bdd_var_builder = biodivine_lib_bdd::BddVariableSetBuilder::new();
let namelist = parser.namelist_rc_refcell().as_ref().borrow().clone();
let namelist = parser
.namelist()
.read()
.expect("ReadLock on namelist failed")
.clone();
let slice_vec: Vec<&str> = namelist.iter().map(<_>::as_ref).collect();
bdd_var_builder.make_variables(&slice_vec);
let bdd_variables = bdd_var_builder.build();
let mut result = Self {
ordering: VarContainer::from_parser(
parser.namelist_rc_refcell(),
parser.dict_rc_refcell(),
),
ac: vec![
bdd_variables.mk_false();
parser.namelist_rc_refcell().as_ref().borrow().len()
],
ordering: parser.var_container(),
ac: vec![bdd_variables.mk_false(); parser.dict_size()],
vars: bdd_variables.variables(),
varset: bdd_variables,
rewrite: None,
@ -91,7 +90,7 @@ impl Adf {
pub(crate) fn stm_rewriting(&mut self, parser: &AdfParser) {
let expr = parser.formula_order().iter().enumerate().fold(
biodivine_lib_bdd::boolean_expression::BooleanExpression::Const(true),
BooleanExpression::Const(true),
|acc, (insert_order, new_order)| {
BooleanExpression::And(
Box::new(acc),
@ -319,22 +318,19 @@ impl Adf {
.collect::<Vec<Vec<Term>>>()
}
/// compute the stable representation
fn stable_representation(&self) -> Bdd {
log::debug!("[Start] stable representation rewriting");
self.ac.iter().enumerate().fold(
self.varset.eval_expression(
&biodivine_lib_bdd::boolean_expression::BooleanExpression::Const(true),
),
self.varset.eval_expression(&BooleanExpression::Const(true)),
|acc, (idx, formula)| {
acc.and(
&formula.iff(
&self.varset.eval_expression(
&biodivine_lib_bdd::boolean_expression::BooleanExpression::Variable(
&self.varset.eval_expression(&BooleanExpression::Variable(
self.ordering
.name(crate::datatypes::Var(idx))
.expect("Variable should exist"),
),
),
)),
),
)
},
@ -387,7 +383,7 @@ pub trait BddRestrict {
impl BddRestrict for Bdd {
fn var_restrict(&self, variable: biodivine_lib_bdd::BddVariable, value: bool) -> Bdd {
self.var_select(variable, value).var_project(variable)
self.var_select(variable, value).var_exists(variable)
}
fn restrict(&self, variables: &[(biodivine_lib_bdd::BddVariable, bool)]) -> Bdd {
@ -395,7 +391,7 @@ impl BddRestrict for Bdd {
variables
.iter()
.for_each(|(var, _val)| variablelist.push(*var));
self.select(variables).project(&variablelist)
self.select(variables).exists(&variablelist)
}
}
@ -431,8 +427,8 @@ mod test {
let c = variables.eval_expression_string("c");
let d = variables.eval_expression_string("a & b & c");
let e = variables.eval_expression_string("a ^ b");
let t = variables.eval_expression(&boolean_expression::BooleanExpression::Const(true));
let f = variables.eval_expression(&boolean_expression::BooleanExpression::Const(false));
let t = variables.eval_expression(&BooleanExpression::Const(true));
let f = variables.eval_expression(&BooleanExpression::Const(false));
println!("{:?}", a.to_string());
println!("{:?}", a.to_bytes());


@ -2,49 +2,69 @@
use super::{Term, Var};
use serde::{Deserialize, Serialize};
use std::{cell::RefCell, collections::HashMap, fmt::Display, rc::Rc};
use std::{collections::HashMap, fmt::Display, sync::Arc, sync::RwLock};
#[derive(Serialize, Deserialize, Debug)]
pub(crate) struct VarContainer {
names: Rc<RefCell<Vec<String>>>,
mapping: Rc<RefCell<HashMap<String, usize>>>,
/// A container which acts as a dictionary as well as an ordering of variables.
/// *names* is a list of variable names whose sequence induces the order of the variables.
/// *mapping* allows searching for a variable name to obtain the corresponding position in the variable list (`names`).
///
/// # Important note
/// If a [VarContainer] is used to instantiate an [Adf][crate::adf::Adf] (resp. a [Biodivine Adf][crate::adfbiodivine::Adf]), a later revision (other than adding more information) might result in a wrong variable-name mapping when printing the output using the [PrintDictionary].
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct VarContainer {
names: Arc<RwLock<Vec<String>>>,
mapping: Arc<RwLock<HashMap<String, usize>>>,
}
impl Default for VarContainer {
fn default() -> Self {
VarContainer {
names: Rc::new(RefCell::new(Vec::new())),
mapping: Rc::new(RefCell::new(HashMap::new())),
names: Arc::new(RwLock::new(Vec::new())),
mapping: Arc::new(RwLock::new(HashMap::new())),
}
}
}
impl VarContainer {
/// Create [`VarContainer`] from its components
pub fn from_parser(
names: Rc<RefCell<Vec<String>>>,
mapping: Rc<RefCell<HashMap<String, usize>>>,
names: Arc<RwLock<Vec<String>>>,
mapping: Arc<RwLock<HashMap<String, usize>>>,
) -> VarContainer {
VarContainer { names, mapping }
}
pub fn copy(from: &Self) -> Self {
VarContainer {
names: from.names.clone(),
mapping: from.mapping.clone(),
}
}
/// Get the [Var] used by the `Bdd` which corresponds to the given [&str].
/// Returns [None] if no matching value is found.
pub fn variable(&self, name: &str) -> Option<Var> {
self.mapping.borrow().get(name).map(|val| Var(*val))
self.mapping
.read()
.ok()
.and_then(|map| map.get(name).map(|val| Var(*val)))
}
/// Get the name which corresponds to the given [Var].
/// Returns [None] if no matching value is found.
pub fn name(&self, var: Var) -> Option<String> {
self.names.borrow().get(var.value()).cloned()
self.names
.read()
.ok()
.and_then(|name| name.get(var.value()).cloned())
}
#[allow(dead_code)]
pub fn names(&self) -> Rc<RefCell<Vec<String>>> {
Rc::clone(&self.names)
/// Return ordered names from [`VarContainer`]
pub fn names(&self) -> Arc<RwLock<Vec<String>>> {
Arc::clone(&self.names)
}
/// Return map from names to indices in [`VarContainer`]
pub fn mappings(&self) -> Arc<RwLock<HashMap<String, usize>>> {
Arc::clone(&self.mapping)
}
/// Creates a [PrintDictionary] for output purposes.
pub fn print_dictionary(&self) -> PrintDictionary {
PrintDictionary::new(self)
}
}
/// A struct which holds the dictionary to print interpretations and allows one to instantiate printable interpretations.
@ -56,7 +76,7 @@ pub struct PrintDictionary {
impl PrintDictionary {
pub(crate) fn new(order: &VarContainer) -> Self {
Self {
ordering: VarContainer::copy(order),
ordering: order.clone(),
}
}
/// creates a [PrintableInterpretation] for output purposes
@ -131,7 +151,7 @@ impl TwoValuedInterpretationsIterator {
let indexes = term
.iter()
.enumerate()
.filter_map(|(idx, &v)| (!v.is_truth_value()).then(|| idx))
.filter_map(|(idx, &v)| (!v.is_truth_value()).then_some(idx))
.rev()
.collect::<Vec<_>>();
let current = term
@ -195,7 +215,7 @@ impl ThreeValuedInterpretationsIterator {
let indexes = term
.iter()
.enumerate()
.filter_map(|(idx, &v)| (!v.is_truth_value()).then(|| idx))
.filter_map(|(idx, &v)| (!v.is_truth_value()).then_some(idx))
.rev()
.collect::<Vec<_>>();
let current = vec![2; indexes.len()];


@ -25,6 +25,16 @@ impl From<usize> for Term {
}
}
impl From<bool> for Term {
fn from(val: bool) -> Self {
if val {
Self::TOP
} else {
Self::BOT
}
}
}
impl From<&biodivine_lib_bdd::Bdd> for Term {
fn from(val: &biodivine_lib_bdd::Bdd) -> Self {
if val.is_true() {
@ -135,7 +145,7 @@ impl Var {
/// Intuitively this is a binary tree structure, where the diagram is allowed to
/// pool same values to the same Node.
#[derive(Debug, Eq, PartialEq, PartialOrd, Ord, Hash, Clone, Copy, Serialize, Deserialize)]
pub(crate) struct BddNode {
pub struct BddNode {
var: Var,
lo: Term,
hi: Term,
@ -147,6 +157,12 @@ impl Display for BddNode {
}
}
impl Default for BddNode {
fn default() -> Self {
Self::top_node()
}
}
impl BddNode {
/// Creates a new Node.
pub fn new(var: Var, lo: Term, hi: Term) -> Self {
@ -259,8 +275,8 @@ mod test {
let term: Term = Term::from(value);
let var = Var::from(value);
// display
assert_eq!(format!("{}", term), format!("Term({})", value));
assert_eq!(format!("{}", var), format!("Var({})", value));
assert_eq!(format!("{term}"), format!("Term({})", value));
assert_eq!(format!("{var}"), format!("Var({})", value));
//deref
assert_eq!(value, *term);
true


@ -160,11 +160,188 @@ for model in adf.complete() {
print!("{}", printer.print_interpretation(&model));
}
```
### Using the [`NoGood`][crate::nogoods::NoGood]-learner approach, together with the [`crossbeam-channel`] implementation
This can be used to run a worker thread and a consumer thread, so that the results are printed as they are computed.
Please note that the [`NoGood`][crate::nogoods::NoGood]-learner needs a heuristic function to work.
The enum [`Heuristic`][crate::adf::heuristics::Heuristic] allows one to choose a pre-defined heuristic, or implement a `Custom` one.
```rust
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;
use adf_bdd::adf::heuristics::Heuristic;
use adf_bdd::datatypes::{Term, adf::VarContainer};
// create a channel
let (s, r) = crossbeam_channel::unbounded();
let variables = VarContainer::default();
let variables_worker = variables.clone();
// spawn a solver thread
let solving = std::thread::spawn(move || {
// use the above example as input
let input = "s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,or(a,b)).ac(c,neg(b)).ac(d,d).";
let parser = AdfParser::with_var_container(variables_worker);
parser.parse()(&input).expect("parsing worked well");
// use hybrid approach
let mut adf = adf_bdd::adfbiodivine::Adf::from_parser(&parser).hybrid_step();
// compute stable with the simple heuristic
adf.stable_nogood_channel(Heuristic::Simple, s);
});
let printer = variables.print_dictionary();
// print results as they are computed
while let Ok(result) = r.recv() {
print!("stable model: {:?} \n", result);
// use dictionary
print!("stable model with variable names: {}", printer.print_interpretation(&result));
# assert_eq!(result, vec![Term(1),Term(1),Term(0),Term(0)]);
}
// waiting for the other thread to close
solving.join().unwrap();
```
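For the `Custom` variant, the heuristic is passed as a closure reference. The following minimal sketch (mirroring the pattern used in the crate's own tests) always assigns ⊥ to the first undecided statement:
```rust
use adf_bdd::adf::Adf;
use adf_bdd::adf::heuristics::Heuristic;
use adf_bdd::datatypes::{Term, Var};
use adf_bdd::parser::AdfParser;
let parser = AdfParser::default();
parser.parse()("s(a).s(b).ac(a,neg(b)).ac(b,neg(a)).").expect("parsing worked well");
let mut adf = Adf::from_parser(&parser);
// custom heuristic: choose the first undecided statement and assign ⊥ to it
let stable = adf.stable_nogood(Heuristic::Custom(&|_adf, interpr| {
    for (idx, term) in interpr.iter().enumerate() {
        if !term.is_truth_value() {
            return Some((Var(idx), Term::BOT));
        }
    }
    None
}));
assert_eq!(
    stable.collect::<Vec<_>>(),
    vec![vec![Term(0), Term(1)], vec![Term(1), Term(0)]]
);
```
Note that the choice of heuristic can change the order in which the models are found, but not the set of models itself.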
### Serialize and Deserialize custom datastructures representing an [`adf::Adf`]
The web application <https://adf-bdd.dev> uses custom data structures that are stored in a MongoDB, which inspired this example.
```rust
use std::sync::{Arc, RwLock};
use serde::{Deserialize, Serialize};
use std::collections::HashMap;
use adf_bdd::datatypes::adf::VarContainer;
use adf_bdd::datatypes::{BddNode, Term, Var};
use adf_bdd::obdd::Bdd;
use adf_bdd::parser::AdfParser;
use adf_bdd::adf::Adf;
// Custom Datastructures for (De-)Serialization
# #[derive(PartialEq, Debug)]
#[derive(Deserialize, Serialize)]
struct MyCustomVarContainer {
names: Vec<String>,
mapping: HashMap<String, String>,
}
impl From<VarContainer> for MyCustomVarContainer {
fn from(source: VarContainer) -> Self {
Self {
names: source.names().read().unwrap().clone(),
mapping: source
.mappings()
.read()
.unwrap()
.iter()
.map(|(k, v)| (k.clone(), v.to_string()))
.collect(),
}
}
}
impl From<MyCustomVarContainer> for VarContainer {
fn from(source: MyCustomVarContainer) -> Self {
Self::from_parser(
Arc::new(RwLock::new(source.names)),
Arc::new(RwLock::new(
source
.mapping
.into_iter()
.map(|(k, v)| (k, v.parse().unwrap()))
.collect(),
)),
)
}
}
# #[derive(PartialEq, Debug)]
#[derive(Deserialize, Serialize)]
struct MyCustomBddNode {
var: String,
lo: String,
hi: String,
}
impl From<BddNode> for MyCustomBddNode {
fn from(source: BddNode) -> Self {
Self {
var: source.var().0.to_string(),
lo: source.lo().0.to_string(),
hi: source.hi().0.to_string(),
}
}
}
impl From<MyCustomBddNode> for BddNode {
fn from(source: MyCustomBddNode) -> Self {
Self::new(
Var(source.var.parse().unwrap()),
Term(source.lo.parse().unwrap()),
Term(source.hi.parse().unwrap()),
)
}
}
# #[derive(PartialEq, Debug)]
#[derive(Deserialize, Serialize)]
struct MyCustomAdf {
ordering: MyCustomVarContainer,
bdd: Vec<MyCustomBddNode>,
ac: Vec<String>,
}
impl From<Adf> for MyCustomAdf {
fn from(source: Adf) -> Self {
Self {
ordering: source.ordering.into(),
bdd: source.bdd.nodes.into_iter().map(Into::into).collect(),
ac: source.ac.into_iter().map(|t| t.0.to_string()).collect(),
}
}
}
impl From<MyCustomAdf> for Adf {
fn from(source: MyCustomAdf) -> Self {
let bdd = Bdd::from(source.bdd.into_iter().map(Into::into).collect::<Vec<BddNode>>());
Adf::from((
source.ordering.into(),
bdd,
source
.ac
.into_iter()
.map(|t| Term(t.parse().unwrap()))
.collect(),
))
}
}
// use the above example as input
let input = "s(a).s(b).s(c).s(d).ac(a,c(v)).ac(b,or(a,b)).ac(c,neg(b)).ac(d,d).";
let parser = AdfParser::default();
parser.parse()(&input).unwrap();
// create Adf
let adf = Adf::from_parser(&parser);
// cast into custom struct
let my_custom_adf: MyCustomAdf = adf.into();
// stringify to json
let json: String = serde_json::to_string(&my_custom_adf).unwrap();
// parse json
let parsed_custom_adf: MyCustomAdf = serde_json::from_str(&json).unwrap();
// cast into lib struct that resembles the original Adf
let parsed_adf: Adf = parsed_custom_adf.into();
# let my_custom_adf2: MyCustomAdf = parsed_adf.into();
# assert_eq!(my_custom_adf, my_custom_adf2);
```
*/
#![deny(
missing_debug_implementations,
missing_copy_implementations,
missing_copy_implementations,
trivial_casts,
trivial_numeric_casts,
unsafe_code
@ -180,8 +357,8 @@ for model in adf.complete() {
pub mod adf;
pub mod adfbiodivine;
pub mod datatypes;
pub mod nogoods;
pub mod obdd;
pub mod parser;
#[cfg(test)]
mod test;
//pub mod obdd2;

lib/src/nogoods.rs (new file)

@ -0,0 +1,812 @@
//! Collection of all nogood-related structures.
use std::{
fmt::{Debug, Display},
ops::{BitAnd, BitOr, BitXor, BitXorAssign},
};
use crate::datatypes::Term;
use roaring::RoaringBitmap;
/// A [NoGood] and an [Interpretation] can be represented by the same structure.
/// Moreover, this duality (i.e. an [Interpretation] becomes a [NoGood]) is reflected by this type alias.
pub type Interpretation = NoGood;
/// Representation of a nogood by a pair of [Bitmaps][RoaringBitmap]
#[derive(Debug, Default, Clone)]
pub struct NoGood {
active: RoaringBitmap,
value: RoaringBitmap,
}
impl Eq for NoGood {}
impl PartialEq for NoGood {
fn eq(&self, other: &Self) -> bool {
(&self.active).bitxor(&other.active).is_empty()
&& (&self.value).bitxor(&other.value).is_empty()
}
}
impl NoGood {
/// Creates an [Interpretation] from a given vector of [Terms][Term].
pub fn from_term_vec(term_vec: &[Term]) -> Interpretation {
let mut result = Self::default();
term_vec.iter().enumerate().for_each(|(idx, val)| {
let idx:u32 = idx.try_into().expect("no-good learner implementation is based on the assumption that only u32::MAX-many variables are in place");
if val.is_truth_value() {
result.active.insert(idx);
if val.is_true() {
result.value.insert(idx);
}
}
});
result
}
/// Creates a [NoGood] representing an atomic assignment.
pub fn new_single_nogood(pos: usize, val: bool) -> NoGood {
let mut result = Self::default();
let pos:u32 = pos.try_into().expect("no-good learner implementation is based on the assumption that only u32::MAX-many variables are in place");
result.active.insert(pos);
if val {
result.value.insert(pos);
}
result
}
/// Returns [None] if the pair contains inconsistent pairs.
/// Otherwise it returns an [Interpretation] which represents the set values.
pub fn try_from_pair_iter(
pair_iter: &mut impl Iterator<Item = (usize, bool)>,
) -> Option<Interpretation> {
let mut result = Self::default();
let mut visit = false;
for (idx, val) in pair_iter {
visit = true;
let idx:u32 = idx.try_into().expect("no-good learner implementation is based on the assumption that only u32::MAX-many variables are in place");
let is_new = result.active.insert(idx);
let upd = if val {
result.value.insert(idx)
} else {
result.value.remove(idx)
};
// if the state is not new and the value is changed
if !is_new && upd {
return None;
}
}
visit.then_some(result)
}
/// Creates an updated [`Vec<Term>`], based on the given [&[Term]] and the [NoGood].
/// The parameter _update_ is set to [`true`] if there has been an update and to [`false`] otherwise
pub fn update_term_vec(&self, term_vec: &[Term], update: &mut bool) -> Vec<Term> {
*update = false;
term_vec
.iter()
.enumerate()
.map(|(idx, val)| {
let idx: u32 = idx.try_into().expect(
"no-good learner implementation is based on the assumption \
that only u32::MAX-many variables are in place",
);
if self.active.contains(idx) {
if !val.is_truth_value() {
*update = true;
}
if self.value.contains(idx) {
Term::TOP
} else {
Term::BOT
}
} else {
*val
}
})
.collect()
}
/// Given a [NoGood] and another one, conclude a non-conflicting value which can be derived on the basis of the given one.
pub fn conclude(&self, other: &NoGood) -> Option<(usize, bool)> {
log::debug!("conclude: {:?} other {:?}", self, other);
let implication = (&self.active).bitxor(&other.active).bitand(&self.active);
let bothactive = (&self.active).bitand(&other.active);
let mut no_matches = (&bothactive).bitand(&other.value);
no_matches.bitxor_assign(bothactive.bitand(&self.value));
if implication.len() == 1 && no_matches.is_empty() {
let pos = implication
.min()
.expect("just checked that there is one element to be found");
log::trace!(
"Conclude {:?}",
Some((pos as usize, !self.value.contains(pos)))
);
Some((pos as usize, !self.value.contains(pos)))
} else {
log::trace!("Nothing to Conclude");
None
}
}
/// Updates the [NoGood] with a second one in a disjunctive (bitor) manner.
pub fn disjunction(&mut self, other: &NoGood) {
self.active = (&self.active).bitor(&other.active);
self.value = (&self.value).bitor(&other.value);
}
/// Returns [true] if the other [Interpretation] matches all the assignments of the current [NoGood].
pub fn is_violating(&self, other: &Interpretation) -> bool {
let active = (&self.active).bitand(&other.active);
if self.active.len() == active.len() {
let lhs = (&active).bitand(&self.value);
let rhs = (&active).bitand(&other.value);
if lhs.bitxor(rhs).is_empty() {
return true;
}
}
false
}
/// Returns the number of set (i.e. active) bits.
pub fn len(&self) -> usize {
self.active
.len()
.try_into()
.expect("expecting to be on a 64 bit system")
}
#[must_use]
/// Returns [true] if the [NoGood] does not set any value.
pub fn is_empty(&self) -> bool {
self.len() == 0
}
}
impl From<&[Term]> for NoGood {
fn from(term_vec: &[Term]) -> Self {
Self::from_term_vec(term_vec)
}
}
/// A structure to store [NoGoods][NoGood] and offer operations and deductions based on them.
#[derive(Debug)]
pub struct NoGoodStore {
store: Vec<Vec<NoGood>>,
duplicates: DuplicateElemination,
}
impl Display for NoGoodStore {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
writeln!(f, "NoGoodStats: [")?;
for (arity, vec) in self.store.iter().enumerate() {
writeln!(f, "{arity}: {}", vec.len())?;
log::debug!("Nogoods:\n {:?}", vec);
}
write!(f, "]")
}
}
impl NoGoodStore {
/// Creates a new [NoGoodStore] and assumes a size compatible with the underlying [NoGood] implementation.
pub fn new(size: u32) -> NoGoodStore {
Self {
store: vec![Vec::new(); size as usize],
duplicates: DuplicateElemination::Equiv,
}
}
/// Tries to create a new [NoGoodStore].
/// Does not succeed if the size is too big for the underlying [NoGood] implementation.
pub fn try_new(size: usize) -> Option<NoGoodStore> {
Some(Self::new(size.try_into().ok()?))
}
/// Sets the behaviour when managing duplicates.
pub fn set_dup_elem(&mut self, mode: DuplicateElemination) {
self.duplicates = mode;
}
/// Adds a given [NoGood]
pub fn add_ng(&mut self, nogood: NoGood) {
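// nogoods are bucketed by their arity: bucket `len - 1` holds all nogoods
// which assign exactly `len` variables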
let mut idx = nogood.len();
if idx > 0 {
idx -= 1;
if match self.duplicates {
DuplicateElemination::None => true,
DuplicateElemination::Equiv => !self.store[idx].contains(&nogood),
DuplicateElemination::Subsume => {
self.store
.iter_mut()
.enumerate()
.for_each(|(cur_idx, ng_vec)| {
if idx >= cur_idx {
ng_vec.retain(|ng| !ng.is_violating(&nogood));
}
});
true
}
} {
self.store[idx].push(nogood);
}
}
}
/// Draws a [Conclusion][NoGood], based on the [NoGoodStore] and the given [NoGood].
/// *Returns* [None] if there is a conflict.
pub fn conclusions(&self, nogood: &NoGood) -> Option<NoGood> {
let mut result = nogood.clone();
log::trace!("ng-store: {:?}", self.store);
self.store
.iter()
.enumerate()
.filter(|(len, _vec)| *len <= nogood.len())
.filter_map(|(_len, val)| {
NoGood::try_from_pair_iter(&mut val.iter().filter_map(|ng| ng.conclude(nogood)))
})
.try_fold(&mut result, |acc, ng| {
if ng.is_violating(acc) {
log::trace!("ng conclusion violating");
None
} else {
acc.disjunction(&ng);
Some(acc)
}
})?;
if self
.store
.iter()
.enumerate()
.filter(|(len, _vec)| *len <= nogood.len())
.any(|(_, vec)| {
vec.iter()
.any(|elem| elem.is_violating(&result) || elem.is_violating(nogood))
})
{
return None;
}
Some(result)
}
/// Constructs the closure of the conclusions drawn by the nogoods with respect to the given `interpretation`.
pub(crate) fn conclusion_closure(&self, interpretation: &[Term]) -> ClosureResult {
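// repeatedly apply the conclusions of the store until either no further update
// occurs (fixpoint reached) or an inconsistency is detected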
let mut update = true;
let mut result = match self.conclusions(&interpretation.into()) {
Some(val) => {
log::trace!(
"conclusion-closure step 1: val:{:?} -> {:?}",
val,
val.update_term_vec(interpretation, &mut update)
);
val.update_term_vec(interpretation, &mut update)
}
None => return ClosureResult::Inconsistent,
};
if !update {
return ClosureResult::NoUpdate;
}
while update {
match self.conclusions(&result.as_slice().into()) {
Some(val) => result = val.update_term_vec(&result, &mut update),
None => return ClosureResult::Inconsistent,
}
}
ClosureResult::Update(result)
}
}
/// Allows one to define how costly the duplicate elimination is done.
#[derive(Debug, Copy, Clone)]
pub enum DuplicateElemination {
/// No Duplicate Detection
None,
/// Only check weak equivalence
Equiv,
/// Check for subsumptions
Subsume,
}
/// Result of a closure computation; issues during the computation are represented by the respective variants.
#[derive(Debug, PartialEq, Eq)]
pub(crate) enum ClosureResult {
Update(Vec<Term>),
NoUpdate,
Inconsistent,
}
impl ClosureResult {
/// Dead_code due to (currently) unused utility function for the [ClosureResult] enum.
#[allow(dead_code)]
pub fn is_update(&self) -> bool {
matches!(self, Self::Update(_))
}
/// Dead_code due to (currently) unused utility function for the [ClosureResult] enum.
#[allow(dead_code)]
pub fn is_no_update(&self) -> bool {
matches!(self, Self::NoUpdate)
}
/// Dead_code due to (currently) unused utility function for the [ClosureResult] enum.
#[allow(dead_code)]
pub fn is_inconsistent(&self) -> bool {
matches!(self, Self::Inconsistent)
}
}
impl TryInto<Vec<Term>> for ClosureResult {
type Error = &'static str;
fn try_into(self) -> Result<Vec<Term>, Self::Error> {
match self {
ClosureResult::Update(val) => Ok(val),
ClosureResult::NoUpdate => Err("No update occurred, use the old value instead"),
ClosureResult::Inconsistent => Err("Inconsistency occurred"),
}
}
}
#[cfg(test)]
mod test {
use super::*;
use test_log::test;
#[test]
fn create_ng() {
let terms = vec![Term::TOP, Term(22), Term(13232), Term::BOT, Term::TOP];
let ng = NoGood::from_term_vec(&terms);
assert_eq!(ng.active.len(), 3);
assert_eq!(ng.value.len(), 2);
assert!(ng.active.contains(0));
assert!(!ng.active.contains(1));
assert!(!ng.active.contains(2));
assert!(ng.active.contains(3));
assert!(ng.active.contains(4));
assert!(ng.value.contains(0));
assert!(!ng.value.contains(1));
assert!(!ng.value.contains(2));
assert!(!ng.value.contains(3));
assert!(ng.value.contains(4));
}
#[test]
fn conclude() {
let ng1 = NoGood::from_term_vec(&[Term::TOP, Term(22), Term::TOP, Term::BOT, Term::TOP]);
let ng2 = NoGood::from_term_vec(&[Term::TOP, Term(22), Term(13232), Term::BOT, Term::TOP]);
let ng3 = NoGood::from_term_vec(&[
Term::TOP,
Term(22),
Term(13232),
Term::BOT,
Term::TOP,
Term::BOT,
]);
assert_eq!(ng1.conclude(&ng2), Some((2, false)));
assert_eq!(ng1.conclude(&ng1), None);
assert_eq!(ng2.conclude(&ng1), None);
assert_eq!(ng1.conclude(&ng3), Some((2, false)));
assert_eq!(ng3.conclude(&ng1), Some((5, true)));
assert_eq!(ng3.conclude(&ng2), Some((5, true)));
// conclusions on empty knowledge
let ng4 = NoGood::from_term_vec(&[Term::TOP]);
let ng5 = NoGood::from_term_vec(&[Term::BOT]);
let ng6 = NoGood::from_term_vec(&[]);
assert_eq!(ng4.conclude(&ng6), Some((0, false)));
assert_eq!(ng5.conclude(&ng6), Some((0, true)));
assert_eq!(ng6.conclude(&ng5), None);
assert_eq!(ng4.conclude(&ng5), None);
let ng_a = NoGood::from_term_vec(&[Term::BOT, Term(22)]);
let ng_b = NoGood::from_term_vec(&[Term(22), Term::TOP]);
assert_eq!(ng_a.conclude(&ng_b), Some((0, true)));
}
#[test]
fn violate() {
let ng1 = NoGood::from_term_vec(&[Term::TOP, Term(22), Term::TOP, Term::BOT, Term::TOP]);
let ng2 = NoGood::from_term_vec(&[Term::TOP, Term(22), Term(13232), Term::BOT, Term::TOP]);
let ng3 = NoGood::from_term_vec(&[
Term::TOP,
Term(22),
Term(13232),
Term::BOT,
Term::TOP,
Term::BOT,
]);
let ng4 = NoGood::from_term_vec(&[Term::TOP]);
assert!(ng4.is_violating(&ng1));
assert!(!ng1.is_violating(&ng4));
assert!(ng2.is_violating(&ng3));
assert!(!ng3.is_violating(&ng2));
assert_eq!(ng4, NoGood::new_single_nogood(0, true));
}
#[test]
fn add_ng() {
let mut ngs = NoGoodStore::new(5);
let ng1 = NoGood::from_term_vec(&[Term::TOP]);
let ng2 = NoGood::from_term_vec(&[Term(22), Term::TOP]);
let ng3 = NoGood::from_term_vec(&[Term(22), Term(22), Term::TOP]);
let ng4 = NoGood::from_term_vec(&[Term(22), Term(22), Term(22), Term::TOP]);
let ng5 = NoGood::from_term_vec(&[Term::BOT]);
assert!(!ng1.is_violating(&ng5));
assert!(ng1.is_violating(&ng1));
ngs.add_ng(ng1.clone());
ngs.add_ng(ng2.clone());
ngs.add_ng(ng3.clone());
ngs.add_ng(ng4.clone());
ngs.add_ng(ng5.clone());
assert_eq!(
ngs.store
.iter()
.fold(0, |acc, ng_vec| { acc + ng_vec.len() }),
5
);
ngs.set_dup_elem(DuplicateElemination::Equiv);
ngs.add_ng(ng1.clone());
ngs.add_ng(ng2.clone());
ngs.add_ng(ng3.clone());
ngs.add_ng(ng4.clone());
ngs.add_ng(ng5.clone());
assert_eq!(
ngs.store
.iter()
.fold(0, |acc, ng_vec| { acc + ng_vec.len() }),
5
);
ngs.set_dup_elem(DuplicateElemination::Subsume);
ngs.add_ng(ng1);
ngs.add_ng(ng2);
ngs.add_ng(ng3);
ngs.add_ng(ng4);
ngs.add_ng(ng5);
assert_eq!(
ngs.store
.iter()
.fold(0, |acc, ng_vec| { acc + ng_vec.len() }),
5
);
ngs.add_ng(NoGood::from_term_vec(&[Term(22), Term::BOT, Term(22)]));
assert_eq!(
ngs.store
.iter()
.fold(0, |acc, ng_vec| { acc + ng_vec.len() }),
6
);
ngs.add_ng(NoGood::from_term_vec(&[Term(22), Term::BOT, Term::BOT]));
assert_eq!(
ngs.store
.iter()
.fold(0, |acc, ng_vec| { acc + ng_vec.len() }),
6
);
assert!(NoGood::from_term_vec(&[Term(22), Term::BOT, Term(22)])
.is_violating(&NoGood::from_term_vec(&[Term(22), Term::BOT, Term::BOT])));
}
#[test]
fn ng_store_conclusions() {
let mut ngs = NoGoodStore::new(5);
let ng1 = NoGood::from_term_vec(&[Term::BOT]);
ngs.add_ng(ng1.clone());
assert_eq!(ng1.conclude(&ng1), None);
assert_eq!(
ng1.conclude(&NoGood::from_term_vec(&[Term(33)])),
Some((0, true))
);
assert_eq!(ngs.conclusions(&ng1), None);
assert_ne!(ngs.conclusions(&NoGood::from_term_vec(&[Term(33)])), None);
assert_eq!(
ngs.conclusions(&NoGood::from_term_vec(&[Term(33)]))
.expect("just checked with prev assertion")
.update_term_vec(&[Term(33)], &mut false),
vec![Term::TOP]
);
let ng2 = NoGood::from_term_vec(&[Term(123), Term::TOP, Term(234), Term(345)]);
let ng3 = NoGood::from_term_vec(&[Term::TOP, Term::BOT, Term::TOP, Term(345)]);
ngs.add_ng(ng2);
ngs.add_ng(ng3);
log::debug!("issues start here");
assert!(ngs
.conclusions(&NoGood::from_term_vec(&[Term::TOP]))
.is_some());
assert_eq!(
ngs.conclusions(&[Term::TOP].as_slice().into())
.expect("just checked with prev assertion")
.update_term_vec(&[Term::TOP, Term(4), Term(5), Term(6), Term(7)], &mut false),
vec![Term::TOP, Term::BOT, Term(5), Term(6), Term(7)]
);
assert!(ngs
.conclusions(&NoGood::from_term_vec(&[
Term::TOP,
Term::BOT,
Term(5),
Term(6),
Term(7)
]))
.is_some());
ngs = NoGoodStore::new(10);
ngs.add_ng([Term::BOT].as_slice().into());
ngs.add_ng(
[Term::TOP, Term::BOT, Term(33), Term::TOP]
.as_slice()
.into(),
);
ngs.add_ng(
[Term::TOP, Term::BOT, Term(33), Term(33), Term::BOT]
.as_slice()
.into(),
);
ngs.add_ng([Term::TOP, Term::TOP].as_slice().into());
let interpr: Vec<Term> = vec![
Term(123),
Term(233),
Term(345),
Term(456),
Term(567),
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000),
];
let concl = ngs.conclusions(&interpr.as_slice().into());
assert_eq!(concl, Some(NoGood::from_term_vec(&[Term::TOP])));
let mut update = false;
let new_interpr = concl
.expect("just tested in assert")
.update_term_vec(&interpr, &mut update);
assert_eq!(
new_interpr,
vec![
Term::TOP,
Term(233),
Term(345),
Term(456),
Term(567),
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000)
]
);
assert!(update);
let new_int_2 = ngs
.conclusions(&new_interpr.as_slice().into())
.map(|val| val.update_term_vec(&new_interpr, &mut update))
.expect("Should return a value");
assert_eq!(
new_int_2,
vec![
Term::TOP,
Term::BOT,
Term(345),
Term(456),
Term(567),
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000)
]
);
assert!(update);
let new_int_3 = ngs
.conclusions(&new_int_2.as_slice().into())
.map(|val| val.update_term_vec(&new_int_2, &mut update))
.expect("Should return a value");
assert_eq!(
new_int_3,
vec![
Term::TOP,
Term::BOT,
Term(345),
Term::BOT,
Term::TOP,
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000)
]
);
assert!(update);
let concl4 = ngs.conclusions(&new_int_3.as_slice().into());
assert_ne!(concl4, None);
let new_int_4 = ngs
.conclusions(&new_int_3.as_slice().into())
.map(|val| val.update_term_vec(&new_int_3, &mut update))
.expect("Should return a value");
assert_eq!(
new_int_4,
vec![
Term::TOP,
Term::BOT,
Term(345),
Term::BOT,
Term::TOP,
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000)
]
);
assert!(!update);
// inconsistence
let interpr = vec![
Term::TOP,
Term::TOP,
Term::BOT,
Term::BOT,
Term(111),
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000),
];
assert_eq!(ngs.conclusions(&interpr.as_slice().into()), None);
ngs = NoGoodStore::new(6);
ngs.add_ng(
[Term(1), Term(1), Term(1), Term(0), Term(0), Term(1)]
.as_slice()
.into(),
);
ngs.add_ng(
[Term(1), Term(1), Term(8), Term(0), Term(0), Term(11)]
.as_slice()
.into(),
);
ngs.add_ng([Term(22), Term(1)].as_slice().into());
assert_eq!(
ngs.conclusions(
&[Term(1), Term(3), Term(3), Term(9), Term(0), Term(1)]
.as_slice()
.into(),
),
Some(NoGood::from_term_vec(&[
Term(1),
Term(0),
Term(3),
Term(9),
Term(0),
Term(1)
]))
);
}
#[test]
fn conclusion_closure() {
let mut ngs = NoGoodStore::new(10);
ngs.add_ng([Term::BOT].as_slice().into());
ngs.add_ng(
[Term::TOP, Term::BOT, Term(33), Term::TOP]
.as_slice()
.into(),
);
ngs.add_ng(
[Term::TOP, Term::BOT, Term(33), Term(33), Term::BOT]
.as_slice()
.into(),
);
ngs.add_ng([Term::TOP, Term::TOP].as_slice().into());
let interpr: Vec<Term> = vec![
Term(123),
Term(233),
Term(345),
Term(456),
Term(567),
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000),
];
let result = ngs.conclusion_closure(&interpr);
assert!(result.is_update());
let resultint: Vec<Term> = result.try_into().expect("just checked conversion");
assert_eq!(
resultint,
vec![
Term::TOP,
Term::BOT,
Term(345),
Term::BOT,
Term::TOP,
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000)
]
);
let result_no_upd = ngs.conclusion_closure(&resultint);
assert!(result_no_upd.is_no_update());
assert_eq!(
<ClosureResult as TryInto<Vec<Term>>>::try_into(result_no_upd)
.expect_err("just checked that it is an error"),
"No update occurred, use the old value instead"
);
let inconsistent_interpr = vec![
Term::TOP,
Term::TOP,
Term::BOT,
Term::BOT,
Term(111),
Term(678),
Term(789),
Term(899),
Term(999),
Term(1000),
];
let result_inconsistent = ngs.conclusion_closure(&inconsistent_interpr);
assert!(result_inconsistent.is_inconsistent());
assert_eq!(
<ClosureResult as TryInto<Vec<Term>>>::try_into(result_inconsistent)
.expect_err("just checked that it is an error"),
"Inconsistency occurred"
);
ngs = NoGoodStore::new(6);
ngs.add_ng(
[Term(1), Term(1), Term(1), Term(0), Term(0), Term(1)]
.as_slice()
.into(),
);
ngs.add_ng(
[Term(1), Term(1), Term(8), Term(0), Term(0), Term(11)]
.as_slice()
.into(),
);
ngs.add_ng([Term(22), Term(1)].as_slice().into());
assert_eq!(
ngs.conclusion_closure(&[Term(1), Term(3), Term(3), Term(9), Term(0), Term(1)]),
ClosureResult::Update(vec![Term(1), Term(0), Term(3), Term(9), Term(0), Term(1)])
);
}
}


@ -1,5 +1,7 @@
//! Module which represents obdds.
//!
#[cfg(feature = "frontend")]
pub mod frontend;
pub mod vectorize;
use crate::datatypes::*;
use serde::{Deserialize, Serialize};
@ -11,7 +13,8 @@ use std::{cell::RefCell, cmp::min, collections::HashMap, fmt::Display};
/// Each roBDD is identified by its corresponding [`Term`], which implicitly identifies the root node of a roBDD.
#[derive(Debug, Serialize, Deserialize)]
pub struct Bdd {
pub(crate) nodes: Vec<BddNode>,
/// The nodes of the [`Bdd`] with their edges
pub nodes: Vec<BddNode>,
#[cfg(feature = "variablelist")]
#[serde(skip)]
var_deps: Vec<HashSet<Var>>,
@ -19,6 +22,16 @@ pub struct Bdd {
cache: HashMap<BddNode, Term>,
#[serde(skip, default = "Bdd::default_count_cache")]
count_cache: RefCell<HashMap<Term, CountNode>>,
#[cfg(feature = "frontend")]
#[serde(skip)]
sender: Option<crossbeam_channel::Sender<BddNode>>,
#[cfg(feature = "frontend")]
#[serde(skip)]
receiver: Option<crossbeam_channel::Receiver<BddNode>>,
#[serde(skip)]
ite_cache: HashMap<(Term, Term, Term), Term>,
#[serde(skip)]
restrict_cache: HashMap<(Term, Var, bool), Term>,
}
impl Display for Bdd {
@ -37,8 +50,20 @@ impl Default for Bdd {
}
}
impl From<Vec<BddNode>> for Bdd {
fn from(nodes: Vec<BddNode>) -> Self {
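// re-insert every node through the normal construction path, so that all
// internal caches of the roBDD are rebuilt consistently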
let mut bdd = Self::new();
for node in nodes {
bdd.node(node.var(), node.lo(), node.hi());
}
bdd
}
}
impl Bdd {
/// Instantiate a new roBDD structures.
/// Instantiate a new roBDD structure.
/// Constants for the [`⊤`][crate::datatypes::Term::TOP] and [`⊥`][crate::datatypes::Term::BOT] concepts are prepared in that step too.
pub fn new() -> Self {
#[cfg(not(feature = "adhoccounting"))]
@ -49,6 +74,12 @@ impl Bdd {
var_deps: vec![HashSet::new(), HashSet::new()],
cache: HashMap::new(),
count_cache: RefCell::new(HashMap::new()),
#[cfg(feature = "frontend")]
sender: None,
#[cfg(feature = "frontend")]
receiver: None,
ite_cache: HashMap::new(),
restrict_cache: HashMap::new(),
}
}
#[cfg(feature = "adhoccounting")]
@ -59,6 +90,12 @@ impl Bdd {
var_deps: vec![HashSet::new(), HashSet::new()],
cache: HashMap::new(),
count_cache: RefCell::new(HashMap::new()),
#[cfg(feature = "frontend")]
sender: None,
#[cfg(feature = "frontend")]
receiver: None,
ite_cache: HashMap::new(),
restrict_cache: HashMap::new(),
};
result
.count_cache
@ -176,6 +213,9 @@ impl Bdd {
/// Restrict the value of a given [variable][crate::datatypes::Var] to **val**.
pub fn restrict(&mut self, tree: Term, var: Var, val: bool) -> Term {
if let Some(result) = self.restrict_cache.get(&(tree, var, val)) {
*result
} else {
let node = self.nodes[tree.0];
#[cfg(feature = "variablelist")]
{
@ -190,12 +230,19 @@ impl Bdd {
} else if node.var() < var {
let lonode = self.restrict(node.lo(), var, val);
let hinode = self.restrict(node.hi(), var, val);
self.node(node.var(), lonode, hinode)
let result = self.node(node.var(), lonode, hinode);
self.restrict_cache.insert((tree, var, val), result);
result
} else {
if val {
self.restrict(node.hi(), var, val)
let result = self.restrict(node.hi(), var, val);
self.restrict_cache.insert((tree, var, val), result);
result
} else {
self.restrict(node.lo(), var, val)
let result = self.restrict(node.lo(), var, val);
self.restrict_cache.insert((tree, var, val), result);
result
}
}
}
}
@ -210,7 +257,10 @@ impl Bdd {
t
} else if t == Term::TOP && e == Term::BOT {
i
} else if let Some(result) = self.ite_cache.get(&(i, t, e)) {
*result
} else {
log::trace!("if_then_else: i {i} t {t} e {e} not found");
let minvar = Var(min(
self.nodes[i.value()].var().value(),
min(
@ -227,7 +277,9 @@ impl Bdd {
let top_ite = self.if_then_else(itop, ttop, etop);
let bot_ite = self.if_then_else(ibot, tbot, ebot);
self.node(minvar, bot_ite, top_ite)
let result = self.node(minvar, bot_ite, top_ite);
self.ite_cache.insert((i, t, e), result);
result
}
}
@ -244,6 +296,15 @@ impl Bdd {
let new_term = Term(self.nodes.len());
self.nodes.push(node);
self.cache.insert(node, new_term);
#[cfg(feature = "frontend")]
if let Some(send) = &self.sender {
match send.send(node) {
Ok(_) => log::trace!("Sent {node} to the channel."),
Err(e) => {
log::error!("Error {e} occurred when sending {node} to {:?}", send)
}
}
}
#[cfg(feature = "variablelist")]
{
let mut var_set: HashSet<Var> = self.var_deps[lo.value()]
@ -253,7 +314,7 @@ impl Bdd {
var_set.insert(var);
self.var_deps.push(var_set);
}
log::debug!("newterm: {} as {:?}", new_term, node);
log::trace!("newterm: {} as {:?}", new_term, node);
#[cfg(feature = "adhoccounting")]
{
let mut count_cache = self.count_cache.borrow_mut();
@ -277,11 +338,14 @@ impl Bdd {
hi_paths.models,
hidepth
);
#[cfg(feature = "adhoccountmodels")]
let (lo_exp, hi_exp) = if lodepth > hidepth {
(1, 2usize.pow((lodepth - hidepth) as u32))
} else {
(2usize.pow((hidepth - lodepth) as u32), 1)
};
#[cfg(not(feature = "adhoccountmodels"))]
let (lo_exp, hi_exp) = (0, 0);
log::debug!("lo_exp {}, hi_exp {}", lo_exp, hi_exp);
count_cache.insert(
new_term,
@ -310,11 +374,11 @@ impl Bdd {
///
/// Use the flag `_memoization` to choose between using the memoization approach or not. (This flag does nothing, if the feature `adhoccounting` is used)
pub fn models(&self, term: Term, _memoization: bool) -> ModelCounts {
#[cfg(feature = "adhoccounting")]
#[cfg(feature = "adhoccountmodels")]
{
return self.count_cache.borrow().get(&term).expect("The term should be originating from this bdd, otherwise the result would be inconsistent anyways").0;
}
#[cfg(not(feature = "adhoccounting"))]
#[cfg(not(feature = "adhoccountmodels"))]
if _memoization {
self.modelcount_memoization(term).0
} else {
@ -505,7 +569,7 @@ impl Bdd {
/// Counts how often another roBDD uses a [variable][crate::datatypes::Var], which occurs in this roBDD.
pub fn active_var_impact(&self, var: Var, termlist: &[Term]) -> usize {
(0..termlist.len()).into_iter().fold(0usize, |acc, idx| {
(0..termlist.len()).fold(0usize, |acc, idx| {
if self
.var_dependencies(termlist[var.value()])
.contains(&Var(idx))
@@ -626,7 +690,7 @@ mod test {
let a1 = bdd.and(v1, v2);
let _a2 = bdd.or(a1, v3);
- assert_eq!(format!("{}", bdd), " \n0 BddNode: Var(18446744073709551614), lo: Term(0), hi: Term(0)\n1 BddNode: Var(18446744073709551615), lo: Term(1), hi: Term(1)\n2 BddNode: Var(0), lo: Term(0), hi: Term(1)\n3 BddNode: Var(1), lo: Term(0), hi: Term(1)\n4 BddNode: Var(2), lo: Term(0), hi: Term(1)\n5 BddNode: Var(0), lo: Term(0), hi: Term(3)\n6 BddNode: Var(1), lo: Term(4), hi: Term(1)\n7 BddNode: Var(0), lo: Term(4), hi: Term(6)\n");
+ assert_eq!(format!("{bdd}"), " \n0 BddNode: Var(18446744073709551614), lo: Term(0), hi: Term(0)\n1 BddNode: Var(18446744073709551615), lo: Term(1), hi: Term(1)\n2 BddNode: Var(0), lo: Term(0), hi: Term(1)\n3 BddNode: Var(1), lo: Term(0), hi: Term(1)\n4 BddNode: Var(2), lo: Term(0), hi: Term(1)\n5 BddNode: Var(0), lo: Term(0), hi: Term(3)\n6 BddNode: Var(1), lo: Term(4), hi: Term(1)\n7 BddNode: Var(0), lo: Term(4), hi: Term(6)\n");
}
#[test]
@@ -642,6 +706,7 @@ mod test {
let formula3 = bdd.xor(v1, v2);
let formula4 = bdd.and(v3, formula2);
#[cfg(feature = "adhoccountmodels")]
assert_eq!(bdd.models(v1, false), (1, 1).into());
let mut x = bdd.count_cache.get_mut().iter().collect::<Vec<_>>();
x.sort();
@@ -650,6 +715,8 @@ mod test {
log::debug!("{:?}", x);
}
log::debug!("{:?}", x);
#[cfg(feature = "adhoccountmodels")]
{
assert_eq!(bdd.models(formula1, false), (3, 1).into());
assert_eq!(bdd.models(formula2, false), (1, 3).into());
assert_eq!(bdd.models(formula3, false), (2, 2).into());
@@ -664,6 +731,7 @@ mod test {
assert_eq!(bdd.models(formula4, true), (5, 3).into());
assert_eq!(bdd.models(Term::TOP, true), (0, 1).into());
assert_eq!(bdd.models(Term::BOT, true), (1, 0).into());
}
assert_eq!(bdd.paths(formula1, false), (2, 1).into());
assert_eq!(bdd.paths(formula2, false), (1, 2).into());
@@ -706,6 +774,8 @@ mod test {
((1, 0).into(), (1, 0).into(), 0)
);
#[cfg(feature = "adhoccountmodels")]
{
assert_eq!(
bdd.modelcount_naive(formula4),
bdd.modelcount_memoization(formula4)
@ -732,6 +802,7 @@ mod test {
bdd.modelcount_naive(Term::BOT),
bdd.modelcount_memoization(Term::BOT)
);
}
assert_eq!(bdd.max_depth(Term::BOT), 0);
assert_eq!(bdd.max_depth(v1), 1);

lib/src/obdd/frontend.rs (new file, 267 lines)

@@ -0,0 +1,267 @@
//! Implementation of frontend-feature related methods and functions.
//! See the structs in the [obdd module][super] for most of the implementations.
use crate::datatypes::Term;
use super::BddNode;
impl super::Bdd {
/// Instantiate a new [roBDD][super::Bdd] structure.
/// Constants for the [`⊤`][crate::datatypes::Term::TOP] and [`⊥`][crate::datatypes::Term::BOT] concepts are prepared in that step too.
/// # Attention
/// Constants for the [`⊤`][crate::datatypes::Term::TOP] and [`⊥`][crate::datatypes::Term::BOT] concepts are not sent, as they are considered to exist in every [Bdd][super::Bdd] structure.
pub fn with_sender(sender: crossbeam_channel::Sender<BddNode>) -> Self {
// TODO: nicer handling of the initialisation, though overhead is not an issue here
let mut result = Self::new();
result.set_sender(sender);
result
}
/// Instantiate a new [roBDD][super::Bdd] structure.
/// Constants for the [`⊤`][crate::datatypes::Term::TOP] and [`⊥`][crate::datatypes::Term::BOT] concepts are prepared in that step too.
/// # Attention
/// Note that mixing manipulating operations and utilising the communication channel for a receiving [roBDD][super::Bdd] may result in inconsistent data.
/// For now, only manipulate the [roBDD][super::Bdd] if no further [recv][Self::recv] calls will be made.
pub fn with_receiver(receiver: crossbeam_channel::Receiver<BddNode>) -> Self {
// TODO: nicer handling of the initialisation, though overhead is not an issue here
let mut result = Self::new();
result.set_receiver(receiver);
result
}
/// Instantiate a new [roBDD][super::Bdd] structure.
/// Constants for the [`⊤`][crate::datatypes::Term::TOP] and [`⊥`][crate::datatypes::Term::BOT] concepts are prepared in that step too.
/// # Attention
/// - Constants for the [`⊤`][crate::datatypes::Term::TOP] and [`⊥`][crate::datatypes::Term::BOT] concepts are not sent, as they are considered to exist in every [Bdd][super::Bdd] structure.
/// - Mixing manipulating operations and utilising the communication channel for a receiving [roBDD][super::Bdd] may result in inconsistent data.
///
/// For now, only manipulate the [roBDD][super::Bdd] if no further [recv][Self::recv] calls will be made.
pub fn with_sender_receiver(
sender: crossbeam_channel::Sender<BddNode>,
receiver: crossbeam_channel::Receiver<BddNode>,
) -> Self {
let mut result = Self::new();
result.set_receiver(receiver);
result.set_sender(sender);
result
}
/// Updates the currently used [sender][crossbeam_channel::Sender]
pub fn set_sender(&mut self, sender: crossbeam_channel::Sender<BddNode>) {
self.sender = Some(sender);
}
/// Updates the currently used [receiver][crossbeam_channel::Receiver]
pub fn set_receiver(&mut self, receiver: crossbeam_channel::Receiver<BddNode>) {
self.receiver = Some(receiver);
}
/// Receives updates until the requested [`Term`][crate::datatypes::Term] is found or all buffered data has been read.
/// Note that the values are read, consumed, and added to the [Bdd][super::Bdd].
/// # Returns
/// - [`true`] if the [term][crate::datatypes::Term] is found (either in the [Bdd][super::Bdd] or in the channel).
/// - [`false`] if neither the [Bdd][super::Bdd] nor the channel contains the [term][crate::datatypes::Term].
pub fn recv(&mut self, term: Term) -> bool {
if term.value() < self.nodes.len() {
true
} else if let Some(recv) = &self.receiver {
loop {
match recv.try_recv() {
Ok(node) => {
let new_term = Term(self.nodes.len());
self.nodes.push(node);
self.cache.insert(node, new_term);
if let Some(send) = &self.sender {
match send.send(node) {
Ok(_) => log::trace!("Sent {node} to the channel."),
Err(e) => {
log::error!(
"Error {e} occurred when sending {node} to {:?}",
send
)
}
}
}
if new_term == term {
return true;
}
}
Err(_) => return false,
}
}
} else {
false
}
}
}
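// A minimal usage sketch of the sender/receiver pairing described above
// (illustrative only; it uses just the functions defined in this module):
//
//     let (send, recv) = crossbeam_channel::unbounded();
//     let mut producer = Bdd::with_sender(send);
//     let mut mirror = Bdd::with_receiver(recv);
//     let v0 = producer.variable(Var(0)); // pushes the new node into the channel
//     assert!(mirror.recv(v0));           // the mirror catches up to the same term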
#[cfg(test)]
mod test {
use super::super::*;
#[test]
fn get_bdd_updates() {
let (send, recv) = crossbeam_channel::unbounded();
let mut bdd = Bdd::with_sender(send);
let solving = std::thread::spawn(move || {
let v1 = bdd.variable(Var(0));
let v2 = bdd.variable(Var(1));
assert_eq!(v1, Term(2));
assert_eq!(v2, Term(3));
let t1 = bdd.and(v1, v2);
let nt1 = bdd.not(t1);
let ft = bdd.or(v1, nt1);
assert_eq!(ft, Term::TOP);
let v3 = bdd.variable(Var(2));
let nv3 = bdd.not(v3);
assert_eq!(bdd.and(v3, nv3), Term::BOT);
let conj = bdd.and(v1, v2);
assert_eq!(bdd.restrict(conj, Var(0), false), Term::BOT);
assert_eq!(bdd.restrict(conj, Var(0), true), v2);
let a = bdd.and(v3, v2);
let b = bdd.or(v2, v1);
let con1 = bdd.and(a, conj);
let end = bdd.or(con1, b);
log::debug!("Restrict test: restrict({},{},false)", end, Var(1));
let x = bdd.restrict(end, Var(1), false);
assert_eq!(x, Term(2));
});
let updates: Vec<BddNode> = recv.iter().collect();
assert_eq!(
updates,
vec![
BddNode::new(Var(0), Term(0), Term(1)),
BddNode::new(Var(1), Term(0), Term(1)),
BddNode::new(Var(0), Term(0), Term(3)),
BddNode::new(Var(1), Term(1), Term(0)),
BddNode::new(Var(0), Term(1), Term(5)),
BddNode::new(Var(2), Term(0), Term(1)),
BddNode::new(Var(2), Term(1), Term(0)),
BddNode::new(Var(1), Term(0), Term(7)),
BddNode::new(Var(0), Term(3), Term(1)),
BddNode::new(Var(0), Term(0), Term(9)),
]
);
solving.join().expect("Both threads should terminate");
}
#[test]
fn recv_send() {
let (send1, recv1) = crossbeam_channel::unbounded();
let (send2, recv2) = crossbeam_channel::unbounded();
let mut bdd1 = Bdd::with_sender(send1);
let mut bddm = Bdd::with_sender_receiver(send2, recv1);
let mut bddl = Bdd::with_receiver(recv2);
let solving = std::thread::spawn(move || {
let v1 = bdd1.variable(Var(0));
let v2 = bdd1.variable(Var(1));
assert_eq!(v1, Term(2));
assert_eq!(v2, Term(3));
let t1 = bdd1.and(v1, v2);
let nt1 = bdd1.not(t1);
let ft = bdd1.or(v1, nt1);
assert_eq!(ft, Term::TOP);
let v3 = bdd1.variable(Var(2));
let nv3 = bdd1.not(v3);
assert_eq!(bdd1.and(v3, nv3), Term::BOT);
let conj = bdd1.and(v1, v2);
assert_eq!(bdd1.restrict(conj, Var(0), false), Term::BOT);
assert_eq!(bdd1.restrict(conj, Var(0), true), v2);
let a = bdd1.and(v3, v2);
let b = bdd1.or(v2, v1);
let con1 = bdd1.and(a, conj);
let end = bdd1.or(con1, b);
log::debug!("Restrict test: restrict({},{},false)", end, Var(1));
let x = bdd1.restrict(end, Var(1), false);
assert_eq!(x, Term(2));
});
// allow the worker to fill the channels
std::thread::sleep(std::time::Duration::from_millis(10));
// both are initialised, no updates so far
assert_eq!(bddm.nodes, bddl.nodes);
// receiving a truth constant should work without changing the bdd
assert!(bddm.recv(Term::TOP));
assert_eq!(bddm.nodes, bddl.nodes);
// receiving some element works for middle -> last, but not last -> middle
assert!(bddm.recv(Term(2)));
assert!(bddl.recv(Term(2)));
assert_eq!(bddl.nodes.len(), 3);
assert!(!bddl.recv(Term(5)));
// get all elements into the middle bdd
assert!(!bddm.recv(Term(usize::MAX)));
assert_eq!(
bddm.nodes,
vec![
BddNode::bot_node(),
BddNode::top_node(),
BddNode::new(Var(0), Term(0), Term(1)),
BddNode::new(Var(1), Term(0), Term(1)),
BddNode::new(Var(0), Term(0), Term(3)),
BddNode::new(Var(1), Term(1), Term(0)),
BddNode::new(Var(0), Term(1), Term(5)),
BddNode::new(Var(2), Term(0), Term(1)),
BddNode::new(Var(2), Term(1), Term(0)),
BddNode::new(Var(1), Term(0), Term(7)),
BddNode::new(Var(0), Term(3), Term(1)),
BddNode::new(Var(0), Term(0), Term(9)),
]
);
// last bdd is still in the previous state
assert_eq!(
bddl.nodes,
vec![
BddNode::bot_node(),
BddNode::top_node(),
BddNode::new(Var(0), Term(0), Term(1)),
]
);
// and now catch up till 10
assert!(bddl.recv(Term(10)));
assert_eq!(
bddl.nodes,
vec![
BddNode::bot_node(),
BddNode::top_node(),
BddNode::new(Var(0), Term(0), Term(1)),
BddNode::new(Var(1), Term(0), Term(1)),
BddNode::new(Var(0), Term(0), Term(3)),
BddNode::new(Var(1), Term(1), Term(0)),
BddNode::new(Var(0), Term(1), Term(5)),
BddNode::new(Var(2), Term(0), Term(1)),
BddNode::new(Var(2), Term(1), Term(0)),
BddNode::new(Var(1), Term(0), Term(7)),
BddNode::new(Var(0), Term(3), Term(1)),
]
);
solving.join().expect("Both threads should terminate");
// asking for 10 again works too
assert!(bddl.recv(Term(10)));
// fully catch up with the last bdd
assert!(bddl.recv(Term(11)));
assert_eq!(bddl.nodes, bddm.nodes);
}
}


@@ -11,7 +11,7 @@ where
V: Serialize + 'a,
{
let container: Vec<_> = target.into_iter().collect();
- serde::Serialize::serialize(&container, ser)
+ Serialize::serialize(&container, ser)
}
/// Deserialize from a [Vector][std::vec::Vec] to a [Map][std::collections::HashMap].
@@ -22,6 +22,6 @@ where
K: Deserialize<'de>,
V: Deserialize<'de>,
{
- let container: Vec<_> = serde::Deserialize::deserialize(des)?;
- Ok(T::from_iter(container.into_iter()))
+ let container: Vec<_> = Deserialize::deserialize(des)?;
+ Ok(T::from_iter(container))
}


@@ -10,32 +10,38 @@ use nom::{
sequence::{delimited, preceded, separated_pair, terminated},
IResult,
};
- use std::{cell::RefCell, collections::HashMap, rc::Rc};
+ use std::collections::HashMap;
+ use std::{
+ cell::RefCell,
+ sync::{Arc, RwLock},
+ };
use crate::datatypes::adf::VarContainer;
/// A representation of a formula, still using the strings from the input.
#[derive(Clone, PartialEq, Eq)]
pub enum Formula<'a> {
pub enum Formula {
/// `c(f)` in the input format.
Bot,
/// `c(v)` in the input format.
Top,
/// Some atomic variable in the input format.
- Atom(&'a str),
+ Atom(String),
/// Negation of a subformula.
- Not(Box<Formula<'a>>),
+ Not(Box<Formula>),
/// Conjunction of two subformulae.
- And(Box<Formula<'a>>, Box<Formula<'a>>),
+ And(Box<Formula>, Box<Formula>),
/// Disjunction of two subformulae.
- Or(Box<Formula<'a>>, Box<Formula<'a>>),
+ Or(Box<Formula>, Box<Formula>),
/// Implication of two subformulae.
- Imp(Box<Formula<'a>>, Box<Formula<'a>>),
+ Imp(Box<Formula>, Box<Formula>),
/// Exclusive-Or of two subformulae.
- Xor(Box<Formula<'a>>, Box<Formula<'a>>),
+ Xor(Box<Formula>, Box<Formula>),
/// If and only if connective between two formulae.
- Iff(Box<Formula<'a>>, Box<Formula<'a>>),
+ Iff(Box<Formula>, Box<Formula>),
}
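// For illustration, the input fragment `and(or(not(a),b),c)` corresponds to the
// following tree built from the variants above (a sketch, not part of the diff):
//
//     Formula::And(
//         Box::new(Formula::Or(
//             Box::new(Formula::Not(Box::new(Formula::Atom("a".to_string())))),
//             Box::new(Formula::Atom("b".to_string())),
//         )),
//         Box::new(Formula::Atom("c".to_string())),
//     )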
- impl Formula<'_> {
+ impl Formula {
pub(crate) fn to_boolean_expr(
&self,
) -> biodivine_lib_bdd::boolean_expression::BooleanExpression {
@@ -84,29 +90,29 @@ impl Formula<'_> {
}
}
- impl std::fmt::Debug for Formula<'_> {
+ impl std::fmt::Debug for Formula {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Formula::Atom(a) => {
write!(f, "{}", a)?;
write!(f, "{a}")?;
}
Formula::Not(n) => {
write!(f, "not({:?})", n)?;
write!(f, "not({n:?})")?;
}
Formula::And(f1, f2) => {
write!(f, "and({:?},{:?})", f1, f2)?;
write!(f, "and({f1:?},{f2:?})")?;
}
Formula::Or(f1, f2) => {
write!(f, "or({:?},{:?})", f1, f2)?;
write!(f, "or({f1:?},{f2:?})")?;
}
Formula::Imp(f1, f2) => {
write!(f, "imp({:?},{:?})", f1, f2)?;
write!(f, "imp({f1:?},{f2:?})")?;
}
Formula::Xor(f1, f2) => {
write!(f, "xor({:?},{:?})", f1, f2)?;
write!(f, "xor({f1:?},{f2:?})")?;
}
Formula::Iff(f1, f2) => {
write!(f, "iff({:?},{:?})", f1, f2)?;
write!(f, "iff({f1:?},{f2:?})")?;
}
Formula::Bot => {
write!(f, "Const(B)")?;
@@ -126,28 +132,29 @@ impl std::fmt::Debug for Formula<'_> {
///
/// Note that the parser can be utilised by an [ADF][`crate::adf::Adf`] to initialise it with minimal overhead.
#[derive(Debug)]
- pub struct AdfParser<'a> {
- namelist: Rc<RefCell<Vec<String>>>,
- dict: Rc<RefCell<HashMap<String, usize>>>,
- formulae: RefCell<Vec<Formula<'a>>>,
- formulaname: RefCell<Vec<String>>,
+ pub struct AdfParser {
+ /// A name for each statement (identified by index in vector)
+ pub namelist: Arc<RwLock<Vec<String>>>,
+ /// Inverse mapping from name to index of statement in vector above
+ pub dict: Arc<RwLock<HashMap<String, usize>>>,
+ /// The formula (acceptance condition) for each statement, identified by its index
+ pub formulae: RefCell<Vec<Formula>>,
+ /// The statement name corresponding to each formula, identified by its index
+ pub formulaname: RefCell<Vec<String>>,
}
- impl Default for AdfParser<'_> {
+ impl Default for AdfParser {
fn default() -> Self {
AdfParser {
- namelist: Rc::new(RefCell::new(Vec::new())),
- dict: Rc::new(RefCell::new(HashMap::new())),
+ namelist: Arc::new(RwLock::new(Vec::new())),
+ dict: Arc::new(RwLock::new(HashMap::new())),
formulae: RefCell::new(Vec::new()),
formulaname: RefCell::new(Vec::new()),
}
}
}
- impl<'a, 'b> AdfParser<'b>
- where
- 'a: 'b,
- {
+ impl<'a> AdfParser {
#[allow(dead_code)]
fn parse_statements(&'a self) -> impl FnMut(&'a str) -> IResult<&'a str, ()> {
move |input| {
@@ -176,8 +183,14 @@ where
fn parse_statement(&'a self) -> impl FnMut(&'a str) -> IResult<&'a str, ()> {
|input| {
- let mut dict = self.dict.borrow_mut();
- let mut namelist = self.namelist.borrow_mut();
+ let mut dict = self
+ .dict
+ .write()
+ .expect("RwLock of dict could not get write access");
+ let mut namelist = self
+ .namelist
+ .write()
+ .expect("RwLock of namelist could not get write access");
let (remain, statement) =
terminated(AdfParser::statement, terminated(tag("."), multispace0))(input)?;
if !dict.contains_key(statement) {
@@ -200,16 +213,31 @@ where
}
}
- impl AdfParser<'_> {
+ impl AdfParser {
/// Creates a new parser, utilising the already existing [VarContainer]
pub fn with_var_container(var_container: VarContainer) -> AdfParser {
AdfParser {
namelist: var_container.names(),
dict: var_container.mappings(),
formulae: RefCell::new(Vec::new()),
formulaname: RefCell::new(Vec::new()),
}
}
}
impl AdfParser {
/// After an update to the namelist, all indices are regenerated.
fn regenerate_indizes(&self) {
self.namelist
- .as_ref()
- .borrow()
+ .read()
+ .expect("ReadLock on namelist failed")
.iter()
.enumerate()
.for_each(|(i, elem)| {
- self.dict.as_ref().borrow_mut().insert(elem.clone(), i);
+ self.dict
+ .write()
+ .expect("WriteLock on dict failed")
+ .insert(elem.clone(), i);
});
}
@@ -217,7 +245,10 @@ impl AdfParser<'_> {
/// Results which were obtained before might become corrupted.
/// Ensure that all used data is physically copied.
pub fn varsort_lexi(&self) -> &Self {
- self.namelist.as_ref().borrow_mut().sort_unstable();
+ self.namelist
+ .write()
+ .expect("WriteLock on namelist failed")
+ .sort_unstable();
self.regenerate_indizes();
self
}
@@ -227,8 +258,8 @@ impl AdfParser<'_> {
/// Ensure that all used data is physically copied.
pub fn varsort_alphanum(&self) -> &Self {
self.namelist
- .as_ref()
- .borrow_mut()
+ .write()
+ .expect("WriteLock on namelist failed")
.string_sort_unstable(natural_lexical_cmp);
self.regenerate_indizes();
self
@@ -254,7 +285,7 @@ impl AdfParser<'_> {
}
fn atomic_term(input: &str) -> IResult<&str, Formula> {
- AdfParser::atomic(input).map(|(input, result)| (input, Formula::Atom(result)))
+ AdfParser::atomic(input).map(|(input, result)| (input, Formula::Atom(result.to_string())))
}
fn formula(input: &str) -> IResult<&str, Formula> {
@@ -343,14 +374,18 @@ impl AdfParser<'_> {
/// Provides the number of parsed statements.
pub fn dict_size(&self) -> usize {
//self.dict.borrow().len()
- self.dict.as_ref().borrow().len()
+ self.dict.read().expect("ReadLock on dict failed").len()
}
/// Returns the number-representation and position of a given statement in string-representation.
///
/// Will return [None] if the string does not occur in the dictionary.
pub fn dict_value(&self, value: &str) -> Option<usize> {
- self.dict.as_ref().borrow().get(value).copied()
+ self.dict
+ .read()
+ .expect("ReadLock on dict failed")
+ .get(value)
+ .copied()
}
/// Returns the acceptance condition of a statement at the given position.
@@ -360,12 +395,17 @@ impl AdfParser<'_> {
self.formulae.borrow().get(idx).cloned()
}
- pub(crate) fn dict_rc_refcell(&self) -> Rc<RefCell<HashMap<String, usize>>> {
- Rc::clone(&self.dict)
+ pub(crate) fn dict(&self) -> Arc<RwLock<HashMap<String, usize>>> {
+ Arc::clone(&self.dict)
}
- pub(crate) fn namelist_rc_refcell(&self) -> Rc<RefCell<Vec<String>>> {
- Rc::clone(&self.namelist)
+ pub(crate) fn namelist(&self) -> Arc<RwLock<Vec<String>>> {
+ Arc::clone(&self.namelist)
}
/// Returns a [`VarContainer`][crate::datatypes::adf::VarContainer] which allows access to the variable information gathered by the parser
pub fn var_container(&self) -> VarContainer {
VarContainer::from_parser(self.namelist(), self.dict())
}
pub(crate) fn formula_count(&self) -> usize {
@@ -379,8 +419,8 @@ impl AdfParser<'_> {
.map(|name| {
*self
.dict
- .as_ref()
- .borrow()
+ .read()
+ .expect("ReadLock on dict failed")
.get(name)
.expect("Dictionary should contain all the used formulanames")
})
@@ -441,7 +481,7 @@ mod test {
let (_remain, result) = AdfParser::formula(input).unwrap();
assert_eq!(
- format!("{:?}", result),
+ format!("{result:?}"),
"and(or(not(a),iff( iff left ,b)),xor(imp(c,d),e))"
);
@@ -465,7 +505,10 @@ mod test {
assert_eq!(parser.dict_value("b"), Some(2usize));
assert_eq!(
format!("{:?}", parser.ac_at(1).unwrap()),
- format!("{:?}", Formula::Not(Box::new(Formula::Atom("a"))))
+ format!(
+ "{:?}",
+ Formula::Not(Box::new(Formula::Atom("a".to_string())))
+ )
);
assert_eq!(parser.formula_count(), 3);
assert_eq!(parser.formula_order(), vec![0, 2, 1]);


@@ -1,5 +1,5 @@
#[test]
- fn {name}() {{
+ fn {name}_biodivine() {{
let resource = "{path}";
log::debug!("resource: {{}}", resource);
let grounded = "{grounded}";
@@ -18,3 +18,23 @@ fn {name}() {{
);
}}
#[test]
fn {name}_naive() {{
let resource = "{path}";
log::debug!("resource: {{}}", resource);
let grounded = "{grounded}";
log::debug!("Grounded: {{}}", grounded);
let parser = AdfParser::default();
let expected_result = std::fs::read_to_string(grounded);
assert!(expected_result.is_ok());
let input = std::fs::read_to_string(resource).unwrap();
parser.parse()(&input).unwrap();
parser.varsort_alphanum();
let mut adf = adf_bdd::adf::Adf::from_parser(&parser);
let grounded = adf.grounded();
assert_eq!(
format!("{{}}", adf.print_interpretation(&grounded)),
format!("{{}}\n",expected_result.unwrap())
);
}}

server/Cargo.toml (new file, 32 lines)

@@ -0,0 +1,32 @@
[package]
name = "adf-bdd-server"
version = "0.3.0"
authors = ["Lukas Gerlach <lukas.gerlach@tu-dresden.de>"]
edition = "2021"
homepage = "https://ellmau.github.io/adf-obdd"
repository = "https://github.com/ellmau/adf-obdd"
license = "MIT"
exclude = ["res/", "./flake*", "*.nix", ".envrc", "_config.yml", "tarpaulin-report.*", "*~"]
description = "Offer Solving ADFs as a service"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
adf_bdd = { version="0.3.1", path="../lib", features = ["frontend"] }
actix-web = "4"
actix-cors = "0.6"
actix-files = "0.6"
env_logger = "0.9"
log = "0.4"
serde = "1"
mongodb = "2.4.0"
actix-identity = "0.5.2"
argon2 = "0.5.0"
actix-session = { version="0.7.2", features = ["cookie-session"] }
names = "0.14.0"
futures-util = "0.3.28"
actix-multipart = "0.6.0"
[features]
cors_for_local_development = []
mock_long_computations = []

server/README.md (new file, 13 lines)

@@ -0,0 +1,13 @@
# Backend for Webservice
This directory contains the backend for <https://adf-bdd.dev> built using actix.rs.
## Usage
For local development run:
- `docker compose up` to run a MongoDB including a web admin interface
- `MONGODB_URI=mongodb://root:example@localhost:27017/ cargo run -F cors_for_local_development -F mock_long_computations` to start the server, connecting it to the MongoDB and allowing CORS from the frontend (running on a separate development server)
The server listens on `localhost:8080`.
The feature flag `-F mock_long_computations` is optional and just mimics longer computation times by using `std::thread::sleep`. This can be helpful to check how the frontend will behave in such cases.
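As a quick smoke test once the server is up (illustrative; the payload shapes are taken from `server/src/user.rs`):
- `curl -X POST localhost:8080/users/register -H 'Content-Type: application/json' -d '{"username": "alice", "password": "secret"}'` to register a user
- `curl -c cookies.txt -X POST localhost:8080/users/login -H 'Content-Type: application/json' -d '{"username": "alice", "password": "secret"}'` to log in and store the session cookie
- `curl -b cookies.txt localhost:8080/adf/` to list the (initially empty) ADF problems of that user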

server/docker-compose.yml (new file, 24 lines)

@@ -0,0 +1,24 @@
version: '3.1'
services:
mongo:
image: mongo:6
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: root
MONGO_INITDB_ROOT_PASSWORD: example
volumes:
- ./mongodb-data:/data/db
mongo-express:
image: mongo-express
restart: always
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: root
ME_CONFIG_MONGODB_ADMINPASSWORD: example
ME_CONFIG_MONGODB_URL: mongodb://root:example@mongo:27017/

server/src/adf.rs (new file, 848 lines)

@@ -0,0 +1,848 @@
use std::cell::RefCell;
use std::collections::{HashMap, HashSet};
use std::sync::{Arc, RwLock};
#[cfg(feature = "mock_long_computations")]
use std::time::Duration;
use actix_identity::Identity;
use actix_multipart::form::{tempfile::TempFile, text::Text, MultipartForm};
use actix_web::rt::spawn;
use actix_web::rt::task::spawn_blocking;
use actix_web::rt::time::timeout;
use actix_web::{delete, get, post, put, web, HttpMessage, HttpRequest, HttpResponse, Responder};
use adf_bdd::datatypes::adf::VarContainer;
use adf_bdd::datatypes::{BddNode, Term, Var};
use futures_util::{FutureExt, TryStreamExt};
use mongodb::bson::doc;
use mongodb::bson::{to_bson, Bson};
use mongodb::results::DeleteResult;
use names::{Generator, Name};
use serde::{Deserialize, Serialize};
use adf_bdd::adf::Adf;
use adf_bdd::adfbiodivine::Adf as BdAdf;
use adf_bdd::obdd::Bdd;
use adf_bdd::parser::{AdfParser, Formula};
use crate::config::{AppState, RunningInfo, Task, ADF_COLL, COMPUTE_TIME, DB_NAME, USER_COLL};
use crate::user::{username_exists, User};
use crate::double_labeled_graph::DoubleLabeledGraph;
type Ac = Vec<Term>;
type AcDb = Vec<String>;
#[derive(Copy, Clone, Debug, Deserialize, Serialize)]
pub(crate) enum Parsing {
Naive,
Hybrid,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, Deserialize, Serialize)]
pub(crate) enum Strategy {
Ground,
Complete,
Stable,
StableCountingA,
StableCountingB,
StableNogood,
}
#[derive(Clone, Deserialize, Serialize)]
pub(crate) struct AcAndGraph {
pub(crate) ac: AcDb,
pub(crate) graph: DoubleLabeledGraph,
}
impl From<AcAndGraph> for Bson {
fn from(source: AcAndGraph) -> Self {
to_bson(&source).expect("Serialization should work")
}
}
#[derive(Clone, Default, Deserialize, Serialize)]
#[serde(tag = "type", content = "content")]
pub(crate) enum OptionWithError<T> {
Some(T),
Error(String),
#[default]
None,
}
impl<T> OptionWithError<T> {
fn is_some(&self) -> bool {
matches!(self, Self::Some(_))
}
}
impl<T: Serialize> From<OptionWithError<T>> for Bson {
fn from(source: OptionWithError<T>) -> Self {
to_bson(&source).expect("Serialization should work")
}
}
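// Illustration of the adjacently-tagged serde representation selected above:
// a value like OptionWithError::Some(42) serializes as {"type":"Some","content":42},
// while the unit variant serializes as {"type":"None"} (values made up).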
type AcsAndGraphsOpt = OptionWithError<Vec<AcAndGraph>>;
#[derive(Default, Deserialize, Serialize)]
pub(crate) struct AcsPerStrategy {
pub(crate) parse_only: AcsAndGraphsOpt,
pub(crate) ground: AcsAndGraphsOpt,
pub(crate) complete: AcsAndGraphsOpt,
pub(crate) stable: AcsAndGraphsOpt,
pub(crate) stable_counting_a: AcsAndGraphsOpt,
pub(crate) stable_counting_b: AcsAndGraphsOpt,
pub(crate) stable_nogood: AcsAndGraphsOpt,
}
#[derive(Clone, Deserialize, Serialize)]
pub(crate) struct VarContainerDb {
names: Vec<String>,
mapping: HashMap<String, String>,
}
impl From<VarContainer> for VarContainerDb {
fn from(source: VarContainer) -> Self {
Self {
names: source.names().read().unwrap().clone(),
mapping: source
.mappings()
.read()
.unwrap()
.iter()
.map(|(k, v)| (k.clone(), v.to_string()))
.collect(),
}
}
}
impl From<VarContainerDb> for VarContainer {
fn from(source: VarContainerDb) -> Self {
Self::from_parser(
Arc::new(RwLock::new(source.names)),
Arc::new(RwLock::new(
source
.mapping
.into_iter()
.map(|(k, v)| (k, v.parse().unwrap()))
.collect(),
)),
)
}
}
#[derive(Clone, Deserialize, Serialize)]
pub(crate) struct BddNodeDb {
var: String,
lo: String,
hi: String,
}
impl From<BddNode> for BddNodeDb {
fn from(source: BddNode) -> Self {
Self {
var: source.var().0.to_string(),
lo: source.lo().0.to_string(),
hi: source.hi().0.to_string(),
}
}
}
impl From<BddNodeDb> for BddNode {
fn from(source: BddNodeDb) -> Self {
Self::new(
Var(source.var.parse().unwrap()),
Term(source.lo.parse().unwrap()),
Term(source.hi.parse().unwrap()),
)
}
}
type SimplifiedBdd = Vec<BddNodeDb>;
#[derive(Clone, Deserialize, Serialize)]
pub(crate) struct SimplifiedAdf {
pub(crate) ordering: VarContainerDb,
pub(crate) bdd: SimplifiedBdd,
pub(crate) ac: AcDb,
}
impl From<Adf> for SimplifiedAdf {
fn from(source: Adf) -> Self {
Self {
ordering: source.ordering.into(),
bdd: source.bdd.nodes.into_iter().map(Into::into).collect(),
ac: source.ac.into_iter().map(|t| t.0.to_string()).collect(),
}
}
}
impl From<SimplifiedAdf> for Adf {
fn from(source: SimplifiedAdf) -> Self {
let bdd = Bdd::from(
source
.bdd
.into_iter()
.map(Into::into)
.collect::<Vec<BddNode>>(),
);
Adf::from((
source.ordering.into(),
bdd,
source
.ac
.into_iter()
.map(|t| Term(t.parse().unwrap()))
.collect(),
))
}
}
type SimplifiedAdfOpt = OptionWithError<SimplifiedAdf>;
#[derive(Deserialize, Serialize)]
pub(crate) struct AdfProblem {
pub(crate) name: String,
pub(crate) username: String,
pub(crate) code: String,
pub(crate) parsing_used: Parsing,
#[serde(default)]
pub(crate) is_af: bool,
pub(crate) adf: SimplifiedAdfOpt,
pub(crate) acs_per_strategy: AcsPerStrategy,
}
#[derive(MultipartForm)]
struct AddAdfProblemBodyMultipart {
name: Text<String>,
code: Option<Text<String>>, // Either Code or File is set
file: Option<TempFile>, // Either Code or File is set
parsing: Text<Parsing>,
is_af: Text<bool>, // if it's not an AF, then it is an ADF
}
#[derive(Clone)]
struct AddAdfProblemBodyPlain {
name: String,
code: String,
parsing: Parsing,
is_af: bool, // if it's not an AF, then it is an ADF
}
impl TryFrom<AddAdfProblemBodyMultipart> for AddAdfProblemBodyPlain {
type Error = &'static str;
fn try_from(source: AddAdfProblemBodyMultipart) -> Result<Self, Self::Error> {
Ok(Self {
name: source.name.into_inner(),
code: source
.file
.map(|f| std::io::read_to_string(f.file).expect("TempFile should be readable"))
.or_else(|| source.code.map(|c| c.into_inner()))
.and_then(|code| (!code.is_empty()).then_some(code))
.ok_or("Either a file or the code has to be provided.")?,
parsing: source.parsing.into_inner(),
is_af: source.is_af.into_inner(),
})
}
}
async fn adf_problem_exists(
adf_coll: &mongodb::Collection<AdfProblem>,
name: &str,
username: &str,
) -> bool {
adf_coll
.find_one(doc! { "name": name, "username": username }, None)
.await
.ok()
.flatten()
.is_some()
}
#[derive(Serialize)]
struct AdfProblemInfo {
name: String,
code: String,
parsing_used: Parsing,
is_af: bool,
acs_per_strategy: AcsPerStrategy,
running_tasks: Vec<Task>,
}
impl AdfProblemInfo {
fn from_adf_prob_and_tasks(adf: AdfProblem, tasks: &HashSet<RunningInfo>) -> Self {
AdfProblemInfo {
name: adf.name.clone(),
code: adf.code,
parsing_used: adf.parsing_used,
is_af: adf.is_af,
acs_per_strategy: adf.acs_per_strategy,
running_tasks: tasks
.iter()
.filter_map(|t| {
(t.adf_name == adf.name && t.username == adf.username).then_some(t.task)
})
.collect(),
}
}
}
struct AF(Vec<Vec<usize>>);
impl From<AF> for AdfParser {
fn from(source: AF) -> Self {
let names: Vec<String> = (0..source.0.len())
.map(|val| (val + 1).to_string())
.collect();
let dict: HashMap<String, usize> = names
.iter()
.enumerate()
.map(|(i, val)| (val.clone(), i))
.collect();
let formulae: Vec<Formula> = source
.0
.into_iter()
.map(|attackers| {
attackers.into_iter().fold(Formula::Top, |acc, attacker| {
Formula::And(
Box::new(acc),
Box::new(Formula::Not(Box::new(Formula::Atom(
(attacker + 1).to_string(),
)))),
)
})
})
.collect();
let formulanames = names.clone();
Self {
namelist: Arc::new(RwLock::new(names)),
dict: Arc::new(RwLock::new(dict)),
formulae: RefCell::new(formulae),
formulaname: RefCell::new(formulanames),
}
}
}
fn parse_af(code: String) -> Result<AdfParser, &'static str> {
let mut lines = code.lines();
let Some(first_line) = lines.next() else {
return Err("There must be at least one line in the AF input.");
};
let first_line: Vec<_> = first_line.split(" ").collect();
if first_line[0] != "p" || first_line[1] != "af" {
return Err("Expected first line to be of the form: p af <n>");
}
let Ok(num_arguments) = first_line[2].parse::<usize>() else {
return Err("Could not convert number of arguments to u32; expected first line to be of the form: p af <n>");
};
let attacks_opt: Option<Vec<(usize, usize)>> = lines
.filter(|line| !line.starts_with('#') && !line.is_empty())
.map(|line| {
let mut line = line.split(" ");
let a = line.next()?;
let b = line.next()?;
if line.next().is_some() {
None
} else {
Some((a.parse::<usize>().ok()?, b.parse::<usize>().ok()?))
}
})
.collect();
let Some(attacks) = attacks_opt else {
return Err("Line must be of the form: n m");
};
// index in outer vector represents attacked element
let mut is_attacked_by: Vec<Vec<usize>> = vec![vec![]; num_arguments];
for (a, b) in attacks {
is_attacked_by[b - 1].push(a - 1); // we normalize names to be zero-indexed
}
let hacked_adf_parser = AdfParser::from(AF(is_attacked_by));
Ok(hacked_adf_parser)
}
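// Illustrative input for the AF format accepted above: "p af 3" declares three
// arguments, every following line "a b" states that argument a attacks
// argument b, and lines starting with '#' are comments:
//
//     p af 3
//     # 1 attacks 2, 2 attacks 3
//     1 2
//     2 3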
#[post("/add")]
async fn add_adf_problem(
req: HttpRequest,
app_state: web::Data<AppState>,
identity: Option<Identity>,
req_body: MultipartForm<AddAdfProblemBodyMultipart>,
) -> impl Responder {
let adf_problem_input: AddAdfProblemBodyPlain = match req_body.into_inner().try_into() {
Ok(input) => input,
Err(err) => return HttpResponse::BadRequest().body(err),
};
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
let user_coll: mongodb::Collection<User> = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
let username = match identity.map(|id| id.id()) {
None => {
// Create and log in temporary user
let gen = Generator::with_naming(Name::Numbered);
let candidates = gen.take(10);
let mut name: Option<String> = None;
for candidate in candidates {
if name.is_some() {
continue;
}
if !(username_exists(&user_coll, &candidate).await) {
name = Some(candidate);
}
}
let username = match name {
Some(name) => name,
None => {
return HttpResponse::InternalServerError().body("Could not generate new name.")
}
};
match user_coll
.insert_one(
User {
username: username.clone(),
password: None,
},
None,
)
.await
{
Ok(_) => (),
Err(err) => return HttpResponse::InternalServerError().body(err.to_string()),
}
Identity::login(&req.extensions(), username.clone()).unwrap();
username
}
Some(Err(err)) => return HttpResponse::InternalServerError().body(err.to_string()),
Some(Ok(username)) => username,
};
let problem_name = if !adf_problem_input.name.is_empty() {
if adf_problem_exists(&adf_coll, &adf_problem_input.name, &username).await {
return HttpResponse::Conflict()
.body("ADF Problem with that name already exists. Please pick another one!");
}
adf_problem_input.name.clone()
} else {
let gen = Generator::with_naming(Name::Numbered);
let candidates = gen.take(10);
let mut name: Option<String> = None;
for candidate in candidates {
if name.is_some() {
continue;
}
if !(adf_problem_exists(&adf_coll, &candidate, &username).await) {
name = Some(candidate);
}
}
match name {
Some(name) => name,
None => {
return HttpResponse::InternalServerError().body("Could not generate new name.")
}
}
};
let adf_problem: AdfProblem = AdfProblem {
name: problem_name.clone(),
username: username.clone(),
code: adf_problem_input.code.clone(),
parsing_used: adf_problem_input.parsing,
is_af: adf_problem_input.is_af,
adf: SimplifiedAdfOpt::None,
acs_per_strategy: AcsPerStrategy::default(),
};
let result = adf_coll.insert_one(&adf_problem, None).await;
if let Err(err) = result {
return HttpResponse::InternalServerError()
.body(format!("Could not create Database entry. Error: {err}"));
}
let username_clone = username.clone();
let problem_name_clone = problem_name.clone();
let adf_fut = timeout(
COMPUTE_TIME,
spawn_blocking(move || {
let running_info = RunningInfo {
username: username_clone,
adf_name: problem_name_clone,
task: Task::Parse,
};
app_state
.currently_running
.lock()
.unwrap()
.insert(running_info.clone());
#[cfg(feature = "mock_long_computations")]
std::thread::sleep(Duration::from_secs(20));
let (parser, parse_result) = {
if adf_problem_input.is_af {
parse_af(adf_problem_input.code)
.map(|p| (p, Ok(())))
.unwrap_or_else(|e| (AdfParser::default(), Err(e)))
} else {
let parser = AdfParser::default();
let parse_result = parser.parse()(&adf_problem_input.code)
.map(|_| ())
.map_err(|_| "ADF could not be parsed, double check your input!");
(parser, parse_result)
}
};
let result = parse_result.map(|_| {
let lib_adf = match adf_problem_input.parsing {
Parsing::Naive => Adf::from_parser(&parser),
Parsing::Hybrid => {
let bd_adf = BdAdf::from_parser(&parser);
bd_adf.hybrid_step_opt(false)
}
};
let ac_and_graph = AcAndGraph {
ac: lib_adf.ac.iter().map(|t| t.0.to_string()).collect(),
graph: DoubleLabeledGraph::from_adf_and_ac(&lib_adf, None),
};
(SimplifiedAdf::from(lib_adf), ac_and_graph)
});
app_state
.currently_running
.lock()
.unwrap()
.remove(&running_info);
result
}),
);
spawn(adf_fut.then(move |adf_res| async move {
let (adf, ac_and_graph): (SimplifiedAdfOpt, AcsAndGraphsOpt) = match adf_res {
Err(err) => (
SimplifiedAdfOpt::Error(err.to_string()),
AcsAndGraphsOpt::Error(err.to_string()),
),
Ok(Err(err)) => (
SimplifiedAdfOpt::Error(err.to_string()),
AcsAndGraphsOpt::Error(err.to_string()),
),
Ok(Ok(Err(err))) => (
SimplifiedAdfOpt::Error(err.to_string()),
AcsAndGraphsOpt::Error(err.to_string()),
),
Ok(Ok(Ok((adf, ac_and_graph)))) => (
SimplifiedAdfOpt::Some(adf),
AcsAndGraphsOpt::Some(vec![ac_and_graph]),
),
};
let result = adf_coll
.update_one(
doc! { "name": problem_name, "username": username },
doc! { "$set": { "adf": &adf, "acs_per_strategy.parse_only": &ac_and_graph } },
None,
)
.await;
if let Err(err) = result {
log::error!("{err}");
}
}));
HttpResponse::Ok().body("Parsing started...")
}
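// Illustrative multipart request against this endpoint (field names from
// AddAdfProblemBodyMultipart above; the ADF statement syntax inside `code` is
// an assumption here, not taken from this file):
//
//     curl -b cookies.txt -X POST localhost:8080/adf/add \
//          -F 'name=my-problem' -F 'code=s(a). ac(a,c(v)).' \
//          -F 'parsing=Naive' -F 'is_af=false'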
#[derive(Deserialize)]
struct SolveAdfProblemBody {
strategy: Strategy,
}
#[put("/{problem_name}/solve")]
async fn solve_adf_problem(
app_state: web::Data<AppState>,
identity: Option<Identity>,
path: web::Path<String>,
req_body: web::Json<SolveAdfProblemBody>,
) -> impl Responder {
let problem_name = path.into_inner();
let adf_problem_input: SolveAdfProblemBody = req_body.into_inner();
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
let username = match identity.map(|id| id.id()) {
Option::None => {
return HttpResponse::Unauthorized().body("You need to login to add an ADF problem.")
}
Some(Err(err)) => return HttpResponse::InternalServerError().body(err.to_string()),
Some(Ok(username)) => username,
};
let adf_problem = match adf_coll
.find_one(doc! { "name": &problem_name, "username": &username }, None)
.await
{
Err(err) => return HttpResponse::InternalServerError().body(err.to_string()),
Ok(Option::None) => {
return HttpResponse::NotFound()
.body(format!("ADF problem with name {problem_name} not found."))
}
Ok(Some(prob)) => prob,
};
let simp_adf: SimplifiedAdf = match adf_problem.adf {
SimplifiedAdfOpt::None => {
return HttpResponse::BadRequest().body("The ADF problem has not been parsed yet.")
}
SimplifiedAdfOpt::Error(err) => {
return HttpResponse::BadRequest().body(format!(
"The ADF problem could not be parsed. Update it and try again. Error: {err}"
))
}
SimplifiedAdfOpt::Some(adf) => adf,
};
let has_been_solved = match adf_problem_input.strategy {
Strategy::Complete => adf_problem.acs_per_strategy.complete.is_some(),
Strategy::Ground => adf_problem.acs_per_strategy.ground.is_some(),
Strategy::Stable => adf_problem.acs_per_strategy.stable.is_some(),
Strategy::StableCountingA => adf_problem.acs_per_strategy.stable_counting_a.is_some(),
Strategy::StableCountingB => adf_problem.acs_per_strategy.stable_counting_b.is_some(),
Strategy::StableNogood => adf_problem.acs_per_strategy.stable_nogood.is_some(),
};
let username_clone = username.clone();
let problem_name_clone = problem_name.clone();
let running_info = RunningInfo {
username: username_clone,
adf_name: problem_name_clone,
task: Task::Solve(adf_problem_input.strategy),
};
// NOTE: we could also return the result here instead of responding with an error, but the canonical way should be to call the get endpoint for the problem.
if has_been_solved
|| app_state
.currently_running
.lock()
.unwrap()
.contains(&running_info)
{
return HttpResponse::Conflict()
.body("The ADF problem has already been solved with this strategy. You can just get the solution from the problem data directly.");
}
let acs_and_graphs_fut = timeout(
COMPUTE_TIME,
spawn_blocking(move || {
app_state
.currently_running
.lock()
.unwrap()
.insert(running_info.clone());
#[cfg(feature = "mock_long_computations")]
std::thread::sleep(Duration::from_secs(20));
let mut adf: Adf = simp_adf.into();
let acs: Vec<Ac> = match adf_problem_input.strategy {
Strategy::Complete => adf.complete().collect(),
Strategy::Ground => vec![adf.grounded()],
Strategy::Stable => adf.stable().collect(),
// TODO: INPUT VALIDATION: only allow this for hybrid parsing
Strategy::StableCountingA => adf.stable_count_optimisation_heu_a().collect(),
// TODO: INPUT VALIDATION: only allow this for hybrid parsing
Strategy::StableCountingB => adf.stable_count_optimisation_heu_b().collect(),
// TODO: support more than just default heuristics
Strategy::StableNogood => adf
.stable_nogood(adf_bdd::adf::heuristics::Heuristic::default())
.collect(),
};
let acs_and_graphs: Vec<AcAndGraph> = acs
.iter()
.map(|ac| AcAndGraph {
ac: ac.iter().map(|t| t.0.to_string()).collect(),
graph: DoubleLabeledGraph::from_adf_and_ac(&adf, Some(ac)),
})
.collect();
app_state
.currently_running
.lock()
.unwrap()
.remove(&running_info);
acs_and_graphs
}),
);
spawn(acs_and_graphs_fut.then(move |acs_and_graphs_res| async move {
let acs_and_graphs_enum: AcsAndGraphsOpt = match acs_and_graphs_res {
Err(err) => AcsAndGraphsOpt::Error(err.to_string()),
Ok(Err(err)) => AcsAndGraphsOpt::Error(err.to_string()),
Ok(Ok(acs_and_graphs)) => AcsAndGraphsOpt::Some(acs_and_graphs),
};
let result = adf_coll.update_one(doc! { "name": problem_name, "username": username }, match adf_problem_input.strategy {
Strategy::Complete => doc! { "$set": { "acs_per_strategy.complete": &acs_and_graphs_enum } },
Strategy::Ground => doc! { "$set": { "acs_per_strategy.ground": &acs_and_graphs_enum } },
Strategy::Stable => doc! { "$set": { "acs_per_strategy.stable": &acs_and_graphs_enum } },
Strategy::StableCountingA => doc! { "$set": { "acs_per_strategy.stable_counting_a": &acs_and_graphs_enum } },
Strategy::StableCountingB => doc! { "$set": { "acs_per_strategy.stable_counting_b": &acs_and_graphs_enum } },
Strategy::StableNogood => doc! { "$set": { "acs_per_strategy.stable_nogood": &acs_and_graphs_enum } },
}, None).await;
if let Err(err) = result {
log::error!("{err}");
}
}));
HttpResponse::Ok().body("Solving started...")
}
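// Illustrative request (strategy names come from the Strategy enum above; with
// serde's default representation the bare variant name is valid JSON here):
//
//     curl -b cookies.txt -X PUT localhost:8080/adf/my-problem/solve \
//          -H 'Content-Type: application/json' -d '{"strategy":"Ground"}'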
#[get("/{problem_name}")]
async fn get_adf_problem(
app_state: web::Data<AppState>,
identity: Option<Identity>,
path: web::Path<String>,
) -> impl Responder {
let problem_name = path.into_inner();
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
let username = match identity.map(|id| id.id()) {
Option::None => {
return HttpResponse::Unauthorized().body("You need to login to get an ADF problem.")
}
Some(Err(err)) => return HttpResponse::InternalServerError().body(err.to_string()),
Some(Ok(username)) => username,
};
let adf_problem = match adf_coll
.find_one(doc! { "name": &problem_name, "username": &username }, None)
.await
{
Err(err) => return HttpResponse::InternalServerError().body(err.to_string()),
Ok(Option::None) => {
return HttpResponse::NotFound()
.body(format!("ADF problem with name {problem_name} not found."))
}
Ok(Some(prob)) => prob,
};
HttpResponse::Ok().json(AdfProblemInfo::from_adf_prob_and_tasks(
adf_problem,
&app_state.currently_running.lock().unwrap(),
))
}
#[delete("/{problem_name}")]
async fn delete_adf_problem(
app_state: web::Data<AppState>,
identity: Option<Identity>,
path: web::Path<String>,
) -> impl Responder {
let problem_name = path.into_inner();
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
let username = match identity.map(|id| id.id()) {
Option::None => {
return HttpResponse::Unauthorized().body("You need to login to get an ADF problem.")
}
Some(Err(err)) => return HttpResponse::InternalServerError().body(err.to_string()),
Some(Ok(username)) => username,
};
match adf_coll
.delete_one(doc! { "name": &problem_name, "username": &username }, None)
.await
{
Ok(DeleteResult {
deleted_count: 0, ..
}) => HttpResponse::InternalServerError().body("Adf Problem could not be deleted."),
Ok(DeleteResult {
deleted_count: 1, ..
}) => HttpResponse::Ok().body("Adf Problem deleted."),
Ok(_) => {
unreachable!("delete_one removes at most one entry so all cases are covered already")
}
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
}
}
#[get("/")]
async fn get_adf_problems_for_user(
app_state: web::Data<AppState>,
identity: Option<Identity>,
) -> impl Responder {
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
let username = match identity.map(|id| id.id()) {
Option::None => {
return HttpResponse::Unauthorized().body("You need to login to get an ADF problem.")
}
Some(Err(err)) => return HttpResponse::InternalServerError().body(err.to_string()),
Some(Ok(username)) => username,
};
let adf_problem_cursor = match adf_coll.find(doc! { "username": &username }, None).await {
Err(err) => return HttpResponse::InternalServerError().body(err.to_string()),
Ok(cursor) => cursor,
};
let adf_problems: Vec<AdfProblemInfo> = match adf_problem_cursor
.map_ok(|adf_problem| {
AdfProblemInfo::from_adf_prob_and_tasks(
adf_problem,
&app_state.currently_running.lock().unwrap(),
)
})
.try_collect()
.await
{
Err(err) => return HttpResponse::InternalServerError().body(err.to_string()),
Ok(probs) => probs,
};
HttpResponse::Ok().json(adf_problems)
}

server/src/config.rs (new file, 37 lines)

@@ -0,0 +1,37 @@
use std::collections::HashSet;
use std::sync::Mutex;
use std::time::Duration;
use mongodb::Client;
use serde::Serialize;
use crate::adf::Strategy;
pub(crate) const COOKIE_DURATION: actix_web::cookie::time::Duration =
actix_web::cookie::time::Duration::minutes(30);
pub(crate) const COMPUTE_TIME: Duration = Duration::from_secs(120);
pub(crate) const ASSET_DIRECTORY: &str = "./assets";
pub(crate) const DB_NAME: &str = "adf-obdd";
pub(crate) const USER_COLL: &str = "users";
pub(crate) const ADF_COLL: &str = "adf-problems";
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, Serialize)]
#[serde(tag = "type", content = "content")]
pub(crate) enum Task {
Parse,
Solve(Strategy),
}
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
pub(crate) struct RunningInfo {
pub(crate) username: String,
pub(crate) adf_name: String,
pub(crate) task: Task,
}
pub(crate) struct AppState {
pub(crate) mongodb_client: Client,
pub(crate) currently_running: Mutex<HashSet<RunningInfo>>,
}


@@ -0,0 +1,118 @@
use serde::{Deserialize, Serialize};
use std::collections::{HashMap, HashSet};
use adf_bdd::adf::Adf;
use adf_bdd::datatypes::{Term, Var};
#[derive(Clone, Deserialize, Serialize, Debug)]
/// This is a DTO for the graph output
pub struct DoubleLabeledGraph {
// number of nodes equals the number of node labels
// nodes implicitly have their index as their ID
node_labels: HashMap<String, String>,
// every node gets this label, which may contain multiple entries (or none)
tree_root_labels: HashMap<String, Vec<String>>,
lo_edges: Vec<(String, String)>,
hi_edges: Vec<(String, String)>,
}
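// Illustrative serialized shape (field names from the struct above; the
// concrete values are made up):
//
//     {
//       "node_labels": { "2": "a" },
//       "tree_root_labels": { "2": ["a"] },
//       "lo_edges": [["2", "0"]],
//       "hi_edges": [["2", "1"]]
//     }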
impl DoubleLabeledGraph {
pub fn from_adf_and_ac(adf: &Adf, ac: Option<&Vec<Term>>) -> Self {
let ac: &Vec<Term> = match ac {
Some(ac) => ac,
None => &adf.ac,
};
let mut node_indices: HashSet<usize> = HashSet::new();
let mut new_node_indices: HashSet<usize> = ac.iter().map(|term| term.value()).collect();
while !new_node_indices.is_empty() {
node_indices = node_indices.union(&new_node_indices).copied().collect();
new_node_indices = HashSet::new();
for node_index in &node_indices {
let lo_node_index = adf.bdd.nodes[*node_index].lo().value();
if !node_indices.contains(&lo_node_index) {
new_node_indices.insert(lo_node_index);
}
let hi_node_index = adf.bdd.nodes[*node_index].hi().value();
if !node_indices.contains(&hi_node_index) {
new_node_indices.insert(hi_node_index);
}
}
}
let node_labels: HashMap<String, String> = adf
.bdd
.nodes
.iter()
.enumerate()
.filter(|(i, _)| node_indices.contains(i))
.map(|(i, &node)| {
let value_part = match node.var() {
Var::TOP => "TOP".to_string(),
Var::BOT => "BOT".to_string(),
_ => adf.ordering.name(node.var()).expect(
"name for each var should exist; special cases are handled separately",
),
};
(i.to_string(), value_part)
})
.collect();
let tree_root_labels_with_usize: HashMap<usize, Vec<String>> = ac.iter().enumerate().fold(
adf.bdd
.nodes
.iter()
.enumerate()
.filter(|(i, _)| node_indices.contains(i))
.map(|(i, _)| (i, vec![]))
.collect(),
|mut acc, (root_for, root_node)| {
acc.get_mut(&root_node.value())
.expect("we know that the index will be in the map")
.push(adf.ordering.name(Var(root_for)).expect(
"name for each var should exist; special cases are handled separately",
));
acc
},
);
let tree_root_labels: HashMap<String, Vec<String>> = tree_root_labels_with_usize
.into_iter()
.map(|(i, vec)| (i.to_string(), vec))
.collect();
let lo_edges: Vec<(String, String)> = adf
.bdd
.nodes
.iter()
.enumerate()
.filter(|(i, _)| node_indices.contains(i))
.filter(|(_, node)| ![Var::TOP, Var::BOT].contains(&node.var()))
.map(|(i, &node)| (i, node.lo().value()))
.map(|(i, v)| (i.to_string(), v.to_string()))
.collect();
let hi_edges: Vec<(String, String)> = adf
.bdd
.nodes
.iter()
.enumerate()
.filter(|(i, _)| node_indices.contains(i))
.filter(|(_, node)| ![Var::TOP, Var::BOT].contains(&node.var()))
.map(|(i, &node)| (i, node.hi().value()))
.map(|(i, v)| (i.to_string(), v.to_string()))
.collect();
DoubleLabeledGraph {
node_labels,
tree_root_labels,
lo_edges,
hi_edges,
}
}
}

server/src/main.rs (new file, 116 lines)

@@ -0,0 +1,116 @@
use std::collections::HashSet;
use std::sync::Mutex;
use actix_files as fs;
use actix_identity::IdentityMiddleware;
use actix_session::config::PersistentSession;
use actix_session::storage::CookieSessionStore;
use actix_session::SessionMiddleware;
use actix_web::cookie::Key;
use actix_web::dev::{fn_service, ServiceRequest, ServiceResponse};
use actix_web::{web, App, HttpServer};
use fs::NamedFile;
use mongodb::Client;
#[cfg(feature = "cors_for_local_development")]
use actix_cors::Cors;
mod adf;
mod config;
mod double_labeled_graph;
mod user;
use adf::{
add_adf_problem, delete_adf_problem, get_adf_problem, get_adf_problems_for_user,
solve_adf_problem,
};
use config::{AppState, ASSET_DIRECTORY, COOKIE_DURATION};
use user::{
create_username_index, delete_account, login, logout, register, update_user, user_info,
};
#[actix_web::main]
async fn main() -> std::io::Result<()> {
env_logger::builder()
.filter_level(log::LevelFilter::Debug)
.init();
// setup mongodb
let mongodb_uri =
std::env::var("MONGODB_URI").unwrap_or_else(|_| "mongodb://localhost:27017".into());
let client = Client::with_uri_str(mongodb_uri)
.await
.expect("failed to connect to mongodb");
create_username_index(&client).await;
// cookie secret key
let secret_key = Key::generate();
// needs to be created outside of the HttpServer closure so it is only created once!
let app_data = web::Data::new(AppState {
mongodb_client: client.clone(),
currently_running: Mutex::new(HashSet::new()),
});
HttpServer::new(move || {
let app = App::new();
#[cfg(feature = "cors_for_local_development")]
let cors = Cors::default()
.allowed_origin("http://localhost:1234")
.allow_any_method()
.allow_any_header()
.supports_credentials()
.max_age(3600);
#[cfg(feature = "cors_for_local_development")]
let app = app.wrap(cors);
#[cfg(feature = "cors_for_local_development")]
let cookie_secure = false;
#[cfg(not(feature = "cors_for_local_development"))]
let cookie_secure = true;
app.app_data(app_data.clone())
.wrap(IdentityMiddleware::default())
.wrap(
SessionMiddleware::builder(CookieSessionStore::default(), secret_key.clone())
.cookie_name("adf-obdd-service-auth".to_owned())
.cookie_secure(cookie_secure)
.session_lifecycle(PersistentSession::default().session_ttl(COOKIE_DURATION))
.build(),
)
.service(
web::scope("/users")
.service(register)
.service(delete_account)
.service(login)
.service(logout)
.service(user_info)
.service(update_user),
)
.service(
web::scope("/adf")
.service(add_adf_problem)
.service(solve_adf_problem)
.service(get_adf_problem)
.service(delete_adf_problem)
.service(get_adf_problems_for_user),
)
// this must be last so it does not override anything
.service(
fs::Files::new("/", ASSET_DIRECTORY)
.index_file("index.html")
.default_handler(fn_service(|req: ServiceRequest| async {
let (req, _) = req.into_parts();
let file =
NamedFile::open_async(format!("{ASSET_DIRECTORY}/index.html")).await?;
let res = file.into_response(&req);
Ok(ServiceResponse::new(req, res))
})),
)
})
.bind(("0.0.0.0", 8080))?
.run()
.await
}

server/src/user.rs (new file, 365 lines)

@@ -0,0 +1,365 @@
use actix_identity::Identity;
use actix_web::{delete, get, post, put, web, HttpMessage, HttpRequest, HttpResponse, Responder};
use argon2::password_hash::rand_core::OsRng;
use argon2::password_hash::SaltString;
use argon2::{Argon2, PasswordHash, PasswordHasher, PasswordVerifier};
use mongodb::results::{DeleteResult, UpdateResult};
use mongodb::{bson::doc, options::IndexOptions, Client, IndexModel};
use serde::{Deserialize, Serialize};
use crate::adf::AdfProblem;
use crate::config::{AppState, ADF_COLL, DB_NAME, USER_COLL};
#[derive(Deserialize, Serialize)]
pub(crate) struct User {
pub(crate) username: String,
pub(crate) password: Option<String>, // NOTE: Password being None indicates a temporary user
}
#[derive(Deserialize, Serialize)]
struct UserPayload {
username: String,
password: String,
}
#[derive(Deserialize, Serialize)]
struct UserInfo {
username: String,
temp: bool,
}
// Creates an index on the "username" field to force the values to be unique.
pub(crate) async fn create_username_index(client: &Client) {
let options = IndexOptions::builder().unique(true).build();
let model = IndexModel::builder()
.keys(doc! { "username": 1 })
.options(options)
.build();
client
.database(DB_NAME)
.collection::<User>(USER_COLL)
.create_index(model, None)
.await
.expect("creating an index should succeed");
}
pub(crate) async fn username_exists(user_coll: &mongodb::Collection<User>, username: &str) -> bool {
user_coll
.find_one(doc! { "username": username }, None)
.await
.ok()
.flatten()
.is_some()
}
// Add new user
#[post("/register")]
async fn register(app_state: web::Data<AppState>, user: web::Json<UserPayload>) -> impl Responder {
let mut user: UserPayload = user.into_inner();
if user.username.is_empty() || user.password.is_empty() {
return HttpResponse::BadRequest().body("Username and Password need to be set!");
}
let user_coll = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
if username_exists(&user_coll, &user.username).await {
return HttpResponse::Conflict()
.body("Username is already taken. Please pick another one!");
}
let pw = &user.password;
let salt = SaltString::generate(&mut OsRng);
let hashed_pw = Argon2::default()
.hash_password(pw.as_bytes(), &salt)
.expect("Error while hashing password!")
.to_string();
user.password = hashed_pw;
let result = user_coll
.insert_one(
User {
username: user.username,
password: Some(user.password),
},
None,
)
.await;
match result {
Ok(_) => HttpResponse::Ok().body("Registration successful!"),
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
}
}
// Remove user
#[delete("/delete")]
async fn delete_account(
app_state: web::Data<AppState>,
identity: Option<Identity>,
) -> impl Responder {
let user_coll: mongodb::Collection<User> = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
match identity {
None => HttpResponse::Unauthorized().body("You are not logged in."),
Some(id) => match id.id() {
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
Ok(username) => {
// Delete all adfs created by user
match adf_coll
.delete_many(doc! { "username": &username }, None)
.await
{
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
Ok(DeleteResult {
deleted_count: _, ..
}) => {
// Delete actual user
match user_coll
.delete_one(doc! { "username": &username }, None)
.await
{
Ok(DeleteResult {
deleted_count: 0, ..
}) => HttpResponse::InternalServerError()
.body("Account could not be deleted."),
Ok(DeleteResult {
deleted_count: 1, ..
}) => {
id.logout();
HttpResponse::Ok().body("Account deleted.")
}
Ok(_) => unreachable!(
"delete_one removes at most one entry so all cases are covered already"
),
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
}
}
}
}
},
}
}
// Login
#[post("/login")]
async fn login(
req: HttpRequest,
app_state: web::Data<AppState>,
user_data: web::Json<UserPayload>,
) -> impl Responder {
let username = &user_data.username;
let pw = &user_data.password;
if username.is_empty() || pw.is_empty() {
return HttpResponse::BadRequest().body("Username and Password need to be set!");
}
let user_coll: mongodb::Collection<User> = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
match user_coll
.find_one(doc! { "username": username }, None)
.await
{
Ok(Some(user)) => {
let stored_password = match &user.password {
None => return HttpResponse::BadRequest().body("Invalid username or password"), // NOTE: login as temporary user is not allowed
Some(password) => password,
};
let stored_hash = PasswordHash::new(stored_password).unwrap();
let pw_valid = Argon2::default()
.verify_password(pw.as_bytes(), &stored_hash)
.is_ok();
if pw_valid {
Identity::login(&req.extensions(), username.to_string()).unwrap();
HttpResponse::Ok().body("Login successful!")
} else {
HttpResponse::BadRequest().body("Invalid email or password")
}
}
Ok(None) => HttpResponse::NotFound().body(format!(
"No user found with username {}",
&user_data.username
)),
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
}
}
#[delete("/logout")]
async fn logout(app_state: web::Data<AppState>, id: Option<Identity>) -> impl Responder {
let user_coll: mongodb::Collection<User> = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
match id {
None => HttpResponse::Unauthorized().body("You are not logged in."),
Some(id) => match id.id() {
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
Ok(username) => {
let user: User = match user_coll
.find_one(doc! { "username": &username }, None)
.await
{
Ok(Some(user)) => user,
Ok(None) => {
return HttpResponse::NotFound()
.body(format!("No user found with username {}", &username))
}
Err(err) => return HttpResponse::InternalServerError().body(err.to_string()),
};
if user.password.is_none() {
HttpResponse::BadRequest().body("You are logged in as a temporary user so we won't log you out because you will not be able to login again. If you want to be able to login again, set a password. Otherwise your session will expire automatically at a certain point.")
} else {
id.logout();
HttpResponse::Ok().body("Logout successful!")
}
}
},
}
}
// Get current user
#[get("/info")]
async fn user_info(app_state: web::Data<AppState>, identity: Option<Identity>) -> impl Responder {
let user_coll: mongodb::Collection<User> = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
match identity {
None => {
HttpResponse::Unauthorized().body("You need to login get your account information.")
}
Some(id) => match id.id() {
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
Ok(username) => {
match user_coll
.find_one(doc! { "username": &username }, None)
.await
{
Ok(Some(user)) => {
let info = UserInfo {
username: user.username,
temp: user.password.is_none(),
};
HttpResponse::Ok().json(info)
}
Ok(None) => {
id.logout();
HttpResponse::NotFound().body("Logged in user does not exist anymore.")
}
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
}
}
},
}
}
// Update current user
#[put("/update")]
async fn update_user(
req: HttpRequest,
app_state: web::Data<AppState>,
identity: Option<Identity>,
user: web::Json<UserPayload>,
) -> impl Responder {
let mut user: UserPayload = user.into_inner();
if user.username.is_empty() || user.password.is_empty() {
return HttpResponse::BadRequest().body("Username and Password need to be set!");
}
let user_coll = app_state
.mongodb_client
.database(DB_NAME)
.collection(USER_COLL);
let adf_coll: mongodb::Collection<AdfProblem> = app_state
.mongodb_client
.database(DB_NAME)
.collection(ADF_COLL);
match identity {
None => {
HttpResponse::Unauthorized().body("You need to login get your account information.")
}
Some(id) => match id.id() {
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
Ok(username) => {
if user.username != username && username_exists(&user_coll, &user.username).await {
return HttpResponse::Conflict()
.body("Username is already taken. Please pick another one!");
}
let pw = &user.password;
let salt = SaltString::generate(&mut OsRng);
let hashed_pw = Argon2::default()
.hash_password(pw.as_bytes(), &salt)
.expect("Error while hashing password!")
.to_string();
user.password = hashed_pw;
let result = user_coll
.replace_one(
doc! { "username": &username },
User {
username: user.username.clone(),
password: Some(user.password),
},
None,
)
.await;
match result {
Ok(UpdateResult {
modified_count: 0, ..
}) => HttpResponse::InternalServerError().body("Account could not be updated."),
Ok(UpdateResult {
modified_count: 1, ..
}) => {
// re-login with new username
Identity::login(&req.extensions(), user.username.clone()).unwrap();
// update all adf problems of user
match adf_coll
.update_many(
doc! { "username": &username },
doc! { "$set": { "username": &user.username } },
None,
)
.await
{
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
Ok(UpdateResult {
modified_count: _, ..
}) => HttpResponse::Ok().json(UserInfo {
username: user.username,
temp: false,
}),
}
}
Ok(_) => unreachable!(
"replace_one replaces at most one entry so all cases are covered already"
),
Err(err) => HttpResponse::InternalServerError().body(err.to_string()),
}
}
},
}
}