Well, while the back-end side of this platform is built on a traditional stack (specifically Apache/PHP/MySQL), the front-end side is built on Node.js, optionally Nginx, and a client-side JavaScript framework based on Vue.js, such as the Quasar framework.
The latter side is fueled with data retrieved from MediaWiki (a forked version with some extended features, chiefly enhanced support for a non-flat namespace, i.e. with sub-pages and sub-folders) through a Node.js script that is triggered every time a page or file is created, updated, moved, or deleted, and that also autonomously checks the MediaWiki database at a fixed interval for any other, indirect change (for instance when pages are imported through the MediaWiki import page, or after the execution of some maintenance script). In this way we can ensure real-time synchronization for direct page edits, and synchronization "within minutes" when the wiki is updated in some indirect way.
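
Just to give an idea, here is a minimal sketch of what such a synchronization script could look like, assuming a small MediaWiki hook extension that POSTs to it on direct edits and a standard recentchanges table for the polling part; the endpoint, port, credentials and the actual re-export logic are placeholders, not the real implementation:

```typescript
// Minimal sketch of the synchronization script (names and schema details are assumptions).
// Two entry points: an HTTP hook called by MediaWiki on direct edits, and a
// polling loop that scans the recentchanges table for indirect changes.
import http from "node:http";
import mysql from "mysql2/promise";

const pool = mysql.createPool({ host: "localhost", user: "wiki", password: "***", database: "wikidb" });

// Re-export a page into the front-end store (placeholder for the real transformation).
async function syncPage(namespace: number, title: string): Promise<void> {
  console.log(`syncing ${namespace}:${title}`);
  // ...fetch the page content, restructure it, write it to the front-end store...
}

// 1) Hook endpoint: a small MediaWiki extension is assumed to POST here on
//    page create/update/move/delete, giving near real-time synchronization.
http.createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/hook/page-changed") {
    let body = "";
    for await (const chunk of req) body += chunk;
    const { namespace, title } = JSON.parse(body);
    await syncPage(namespace, title);
    res.end("ok");
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(3100);

// 2) Polling loop: every few minutes, pick up changes that bypassed the hooks
//    (imports, maintenance scripts, ...) by reading the recentchanges table.
let lastSeen = "00000000000000"; // MediaWiki timestamps are YYYYMMDDHHMMSS strings
setInterval(async () => {
  const [rows] = await pool.query(
    "SELECT rc_namespace, rc_title, rc_timestamp FROM recentchanges WHERE rc_timestamp > ? ORDER BY rc_timestamp",
    [lastSeen]
  );
  for (const row of rows as any[]) {
    await syncPage(row.rc_namespace, row.rc_title.toString());
    lastSeen = row.rc_timestamp.toString();
  }
}, 5 * 60 * 1000);
```
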
During this process the Node.js script performs some fairly involved operations, structuring the data so that it can be retrieved, or even "consumed", by the front-end interface in ways that would not be possible without this intermediate step and a further elaboration of the data (for instance, this infrastructure allows offline navigation through consistent sets of pages).
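
As an illustration of the kind of restructuring involved (the names below are purely hypothetical): MediaWiki stores sub-pages as flat, slash-separated titles, while the front end wants an actual tree that can be rendered as folders and serialized as one consistent bundle for offline navigation.

```typescript
// Sketch of a restructuring step: turn flat MediaWiki titles with "/" sub-pages
// into a tree the front end can render and ship to the client as a single bundle.
interface PageNode {
  title: string;          // full wiki title, e.g. "Manual/Install/Linux"
  segment: string;        // last path component, e.g. "Linux"
  children: PageNode[];
}

function buildTree(titles: string[]): PageNode[] {
  const roots: PageNode[] = [];
  const byTitle = new Map<string, PageNode>();

  // Sorting guarantees that a parent title is processed before its sub-pages.
  for (const title of [...titles].sort()) {
    const parts = title.split("/");
    const node: PageNode = { title, segment: parts[parts.length - 1], children: [] };
    byTitle.set(title, node);
    const parentTitle = parts.slice(0, -1).join("/");
    const parent = byTitle.get(parentTitle);
    // Orphan sub-pages (whose parent page does not exist) fall back to the root level.
    (parent ? parent.children : roots).push(node);
  }
  return roots;
}

// Example: a consistent set of pages that can be serialized as one JSON
// document and handed to the client for offline browsing.
const tree = buildTree(["Manual", "Manual/Install", "Manual/Install/Linux", "FAQ"]);
console.log(JSON.stringify(tree, null, 2));
```
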
In short, the front-end application (which is of course built ad hoc for this purpose, and follows a development process completely decoupled from that of MediaWiki, even if the interdependency of the two is taken into account) retrieves its data through a dedicated API exposed by a specific Node.js back end, and this data is continuously updated and restructured to follow the changes performed on the MediaWiki back end.
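
Again purely as a sketch, and not the actual interface: the dedicated API and its consumption from the Quasar/Vue side could look roughly like this, with the route name, port and in-memory store being assumptions:

```typescript
// Sketch of the dedicated API layer serving the structured pages, plus the
// corresponding client-side fetch (all names here are illustrative).
import express from "express";

const app = express();

// In this sketch the structured pages produced by the sync script are kept in
// memory; the real service would read them from its own store.
const pages = new Map<string, { title: string; html: string; updated: string }>();

app.get("/api/page", (req, res) => {
  const page = pages.get(String(req.query.title ?? ""));
  if (!page) {
    res.status(404).json({ error: "not found" });
    return;
  }
  res.json(page);
});

app.listen(3000);

// Client side (e.g. inside a Quasar/Vue component) the data is simply fetched:
async function loadPage(title: string) {
  const resp = await fetch(`/api/page?title=${encodeURIComponent(title)}`);
  return resp.json();
}
```
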