任务目标
一句话:记录当前仓库、线上环境、最近实现和开发注意事项,方便后续继续协作与接手。
当前状态
- 已完成:
- 项目主线已经从旧教务模块切换为“网盘 + 快传 + 管理台”结构
- 快传模块已整合进主站,支持取件码、分享链接、P2P 传输、部分文件接收、ZIP 下载、存入网盘
- 网盘已支持上传、下载、重命名、删除、移动、复制、公开分享、接收快传后存入
- 注册改成邀请码机制,邀请码单次使用后自动刷新,并在管理台展示与复制
- 同账号现已允许桌面端与移动端同时在线,但同一端类型仍只保留一个有效会话;同端再次登录会在下一次受保护请求时挤掉旧会话
- 后端已补生产 CORS,默认放行 `https://yoyuzh.xyz` 与 `https://www.yoyuzh.xyz`
- 线上文件存储与前端静态托管已迁到多吉云对象存储,后端通过临时密钥 API 获取短期 S3 会话访问底层 COS 兼容桶
- 管理台 dashboard 已显示总存储量、下载流量、今日请求次数、快传使用量、离线快传占用和请求折线图,并支持调整离线快传总上限
- 管理台用户列表已显示每个用户的已用空间 / 配额,表格也已收紧
- 游戏页已接入 `/race/`、`/t_race/`,带站内播放器、退出按钮和友情链接
- 2026-04-02 已统一密码策略为“至少 8 位且包含大写字母”,并补测试确认管理员改密后旧密码失效、新密码生效
- 2026-04-02 已放开未登录直达快传:登录页可直接进入快传,匿名用户可发在线快传;2026-04-03 又放开了离线接收,因此匿名用户现在可发在线快传、接收在线快传、接收离线快传,但发离线和把离线文件存入网盘仍要求登录
- 2026-04-02 快传发送页已新增“我的离线快传”区域:登录用户可查看自己未过期的离线快传记录,并点开弹层重新查看取件码、二维码和分享链接
- 2026-04-02 已将“我的离线快传”后端接口正式部署到生产,`/api/transfer/sessions/offline/mine` 在线可用,未登录访问会返回 401
- 2026-04-02 前端主入口已按屏幕宽度自动切换桌面壳与移动壳,宽度小于 768px 时渲染 `MobileApp`
- 2026-04-02 移动端 `MobileFiles` 与 `MobileTransfer` 已发布与桌面一致的动态光晕背景,不再使用纯黑静态底色
- 2026-04-02 网盘存储模型已改为“`StoredFile` 逻辑元数据 + `FileBlob` 物理对象引用”;新上传写入全局 `blobs/...` key,分享导入与网盘复制都会直接复用同一个 blob,不再复制物理文件
- 2026-04-02 后端启动时会自动把旧 `portal_file.storage_name` 数据回填到新的 `blob_id` 引用;管理台 `totalStorageBytes` 现已按 `FileBlob` 汇总真实物理占用,而不是按逻辑文件行数重复累加
- 2026-04-02 18:43 CST 已将共享 blob 改造后的后端 jar 部署到生产;`my-site-api.service` 重启成功,`https://api.yoyuzh.xyz/swagger-ui.html` 仍可访问
- 2026-04-02 19:08 CST 已将“上传落库失败时自动删除已写入 blob”修复部署到生产;当前普通上传、直传完成、外部导入在元数据保存失败时都会回滚底层 `blobs/...` 对象,避免再产生孤儿 blob
- 2026-04-02 管理台 summary 已新增“最近 7 天上线记录”:JWT 鉴权成功后会按天去重记录上线用户,保留 7 天并返回每天人数与用户名单
- 2026-04-02 管理台“今日请求折线图”已改为只展示当天已过去的小时;例如当天只到 07 点时,曲线只会覆盖 00:00-07:00,点位也缩成小圆点
- 2026-04-03 已在 `front/` 接入 Capacitor,生成 `front/android` Android 工程并成功产出调试 APK:`front/android/app/build/outputs/apk/debug/app-debug.apk`
- 2026-04-03 快传前端已支持通过 `VITE_TRANSFER_ICE_SERVERS_JSON` 追加自定义 ICE / TURN 服务器;当前默认仍只有 STUN,因此跨运营商或手机蜂窝网络的在线 P2P 传输仍依赖后续补 TURN 才能稳定
- 2026-04-03 Android 打包已确认走“Vite 产物 -> `npx cap sync android` -> Gradle `assembleDebug`”链路;当前应用包名为 `xyz.yoyuzh.portal`
- 2026-04-03 Android WebView 壳内的前端 API 基址已改成运行时判断:Web 站点继续走相对 `/api`,Capacitor `localhost` 壳在 `http://localhost` 与 `https://localhost` 下都会默认直连 `https://api.yoyuzh.xyz/api`,避免 APK 把请求误打到应用内本地地址;后端 CORS 也同步放行了 `https://localhost`
- 2026-04-03 由于这台机器直连 `dl.google.com` / Android Maven 仓库会 TLS 握手失败,Android 构建已改走阿里云 Google Maven 镜像,并通过 `redirector.gvt1.com` 手动落本机 SDK 包
- 2026-04-03 总览页已新增 Android APK 下载入口;当前 Web 总览已改走后端公开下载口 `https://api.yoyuzh.xyz/api/app/android/download`,不再直接指向前端静态桶
- 2026-04-03 鉴权链路已按客户端类型拆分会话:前端请求会带 `X-Yoyuzh-Client`,后端分别维护桌面和移动的活跃 `sid` 与 refresh token 集合,因此桌面 Web 与移动端 APK 可同时登录;移动端总览页在 Capacitor 原生壳内会显示“检查更新”,通过探测 OSS 上 APK 最新修改时间并直接跳转下载链接完成更新
- 2026-04-03 前端 OSS 发布脚本现已收口为“只发布 `front/dist` 静态站”,不再上传 APK
- 2026-04-03 已新增仓库根脚本 `node scripts/deploy-android-release.mjs`,只负责把 APK 与 `android/releases/latest.json` 上传到 Android 独立对象路径;`node scripts/deploy-android-apk.mjs` 会在前端静态站发布后自动调用它
- 2026-04-03 Android 更新链路已改为“APK 存在文件桶独立路径 `android/releases/`,后端 `/api/app/android/latest` 读取 `android/releases/latest.json` 返回带版本号的后端下载地址,`/api/app/android/download` 直接分发 APK 字节流”;这样 App 内检查更新和 Web 下载都不会再误用前端静态桶旧包,也不依赖对象存储预签名下载
- 2026-04-03 网盘已新增回收站:`DELETE /api/files/{id}` 现在会把文件或整个目录树软删除进回收站,默认保留 10 天;前端桌面网盘页在左侧目录栏最下方新增“回收站”入口,移动端网盘页头也可进入回收站查看并恢复
- 2026-04-05 Git 远程已从 GitHub 迁到自建私有 Gitea:`https://git.yoyuzh.xyz/yoyuz/my_site.git`;当前本地 `main` 已推到新的 `origin/main`
- 2026-04-06 已把本地项目密钥和部署元信息统一收口到仓库根目录 `.env`,模板文件改为 `.env.example`;前端 / Android 发布脚本现在优先读取 `.env`,旧 `.env.oss.local` 只作为兼容回退,不再作为主入口
- 2026-04-06 已删除根目录 `账号密码.txt`,服务器 SSH 登录信息改为放在根目录 `.env`
- 2026-04-06 已把补充型 handoff 文档收口到 `docs/agents/handoff.md`;`CLAUDE.md` 继续保留在根目录作为 agent 入口,额外的 `NEXT_CODEX_HANDOFF.md` 与目录说明文档已删除
- 2026-04-06 已确认前端当前只是在源码层使用 `front/src/components/ui/*` 组件,不依赖根目录 `shadcn` CLI;因此已删除根目录 `package.json`、`package-lock.json`、`components.json` 和根目录 `node_modules`
- 根目录 README 已重写为中文公开版 GitHub 风格
- VS Code 工作区已补 `.vscode/settings.json`、`.vscode/extensions.json`、`lombok.config`,并在 `backend/pom.xml` 显式声明了 Lombok annotation processor
- 进行中:
- 继续观察 VS Code Java/Lombok 误报是否完全消失
- 后续如果再做 README/开源化展示,可以继续补 banner、截图和架构图
- 待开始:
- 如果用户继续提需求,优先沿当前网站主线迭代,不再回到旧教务方向
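上面“当前状态”里提到的“Capacitor 壳内 API 基址改成运行时判断”可以用下面的草图说明。这是假设性示意,真实实现在 `front/src/lib/api.ts`;函数名 `resolveApiBase` 为本文虚构,判断条件(`http://localhost` 与 `https://localhost` 视为原生壳)取自上文记录:

```typescript
// 运行时选择 API 基址的最小示意(非仓库原文):
// Capacitor 原生壳内页面的 origin 是 http(s)://localhost,此时直连线上 API 域名;
// 普通 Web 站点继续走同源相对路径 /api,避免 APK 把请求打到应用内本地地址。
function resolveApiBase(origin: string): string {
  const capacitorShellOrigins = ["http://localhost", "https://localhost"];
  if (capacitorShellOrigins.includes(origin)) {
    return "https://api.yoyuzh.xyz/api";
  }
  return "/api";
}
```

调用处可以直接用 `resolveApiBase(window.location.origin)` 拼接请求 URL;对应地,后端 CORS 也需要像上文那样放行 `https://localhost`。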
已做决策
| 决策 | 理由 | 排除的方案及原因 |
|---|---|---|
| 用快传模块替换旧教务模块 | 当前产品方向已经转向文件流转和个人站点工具集合 | 继续保留教务逻辑: 已不符合当前站点定位,维护成本高 |
| 快传采用“后端信令 + 浏览器 P2P 传输” | 文件内容不走自有服务器带宽,体验更接近局域/点对点传输 | 走服务器中转: 会增加服务器流量和实现复杂度 |
| 网盘文件改成“共享 blob + StoredFile 引用” | 分享导入、网盘复制、重命名、移动都不应再触发物理对象复制,删除时也需要按最后引用回收真实对象 | 继续把物理 key 绑定 userId/path/storageName: 会导致转存和复制永远写出第二份对象,浪费存储 |
| 快传接收页收口回原 /transfer 页面 | 用户不需要单独进入专门的接收页面,入口更统一 | 独立接收页: 路径分散、用户心智更差 |
| 网盘侧边栏改成单一树状目录结构 | 更像真实网盘,层级关系清晰 | 保留“快速访问 + 目录”双区块: 结构割裂 |
| 注册邀请码改成单次使用后自动刷新 | 更适合私域邀请式注册,管理台也能直接查看当前邀请码 | 固定邀请码: 容易扩散且不可控 |
| 登录态通过“按客户端类型拆分的会话 ID + JWT sid/client claim”实现 | 桌面 Web 和移动 APK 可以同时在线,但同一端再次登录仍会立即挤掉旧 access token,而不仅仅是旧 refresh token | 只保留全局单会话: 会让桌面/移动互相顶下线;只撤销 refresh token: 旧 access token 仍会继续有效一段时间 |
| 前端发布继续使用 node scripts/deploy-front-oss.mjs | 仓库已有正式静态站发布脚本,现已切到多吉云临时密钥 + S3 兼容上传流程 | 手动上传对象存储: 容易出错,也不利于复用 |
| 后端发布继续采用“本地打包 + SSH/ SCP 上传 jar + systemd 重启” | 当前线上就按这个方式运行 | 自创部署脚本: 仓库里没有现成正式脚本,容易和现网偏离 |
| 主站 CORS 默认放行 https://yoyuzh.xyz 与 https://www.yoyuzh.xyz | 前端生产环境托管在独立静态站域名下,必须允许主站跨域调用后端 API | 仅保留 localhost: 会导致生产站调用 API 时被浏览器拦截 |
| 文件存储切到多吉云对象存储并使用临时密钥 | 后端、前端发布和迁移脚本都可统一走 S3 兼容协议,同时减少长期静态密钥暴露 | 继续使用阿里云 OSS 固定密钥: 已不符合当前多吉云接入方式 |
| 密码策略放宽到“至少 8 位且包含大写字母” | 降低注册和管理员改密阻力,同时保留最基础的复杂度门槛 | 继续要求大小写 + 数字 + 特殊字符: 对当前站点用户而言过重,且已导致后台改密体验不一致 |
| 匿名用户仅开放在线快传,不开放离线快传 | 允许登录页直接进入快传,同时避免匿名用户占用站点持久存储 | 匿名也开放离线快传: 会增加滥用风险和存储成本 |
| 已登录用户可以在快传页回看自己的离线快传记录 | 离线快传有效期长达 7 天,用户需要在不重新上传的情况下再次查看取件码和分享链接 | 只在刚创建成功时展示一次取件信息: 用户丢失取件码后无法自助找回 |
| 前端主入口按宽度自动切换到移动壳 | 不需要单独维护 /m 路由,用户在小屏设备上直接进入移动端布局 | 独立 /m 路由: 需要额外记忆入口且与主站状态分叉 |
| 管理台上线记录按“JWT 鉴权成功的每日去重用户”统计,并只保留 7 天 | 后台需要回答“每天多少人上线、具体是谁”,同时不必引入更重的行为埋点系统 | 只统计登录接口: 无法覆盖 refresh 之后的真实活跃访问;无限保留历史: 超出当前管理需求 |
| Android 客户端先采用 Capacitor 包裹现有前端站点 | 现有 React/Vite 页面、鉴权和 API 调用可以直接复用,成本最低 | 重新单写原生 Android WebView 壳: 会引入额外原生维护面;改成 React Native / Flutter: 超出当前需求 |
| APK 发布通过前端 OSS 脚本额外上传稳定对象 key,而不是进入 front/dist | 既能让总览页长期使用固定下载地址,也能避免 npx cap sync android 把旧 APK 再次塞进新的 APK 资产里 | 把 APK 直接放进 front/public 或 front/dist: 会污染前端静态产物,并可能导致 Android 包体递归膨胀 |
| 网盘删除采用“回收站软删除 + 10 天过期清理” | 用户删错文件后需要可恢复,同时共享 blob 仍要等最后引用真正过期后才删除底层对象 | 继续立即物理删除: 不可恢复且误删成本高;额外建独立归档表: 当前需求下实现过重 |
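上表中“按客户端类型拆分的会话 ID”决策可以用下面的草图说明。这是假设性示意,真实逻辑在后端 `AuthService` / `JwtTokenProvider`;类名与字段为本文虚构,语义取自上文:同一用户的 desktop 与 mobile 各保留一个活跃 `sid`,同端重登直接覆盖旧 `sid`,旧 access token 在下一次校验时因 `sid` 不匹配被拒绝:

```typescript
// 按客户端类型拆分会话的最小示意(非仓库原文)。
type ClientType = "desktop" | "mobile";

class SessionRegistry {
  // key 形如 `${userId}:${clientType}`,值是当前唯一有效的 sid
  private active = new Map<string, string>();

  login(userId: string, client: ClientType, sid: string): void {
    // 同端再次登录直接顶掉旧会话;另一端的会话不受影响
    this.active.set(`${userId}:${client}`, sid);
  }

  // JWT 过滤器侧:token 里的 sid 必须等于该端当前活跃 sid
  isValid(userId: string, client: ClientType, sid: string): boolean {
    return this.active.get(`${userId}:${client}`) === sid;
  }
}
```

这个结构同时解释了表中排除“只撤销 refresh token”的原因:`sid` 比对让旧 access token 立即失效,而不是等它自然过期。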
待解决问题
- VS Code 若仍报 `final 字段未在构造器初始化` 之类错误,优先判断为 Lombok / Java Language Server 误报,而不是源码真实错误
- `front/README.md` 仍是旧模板风格说明,当前真实入口说明以根目录 `README.md` 为准,后续可继续整理
- 前端构建仍有 chunk size warning,目前不阻塞发布,但后续可以考虑做更细的拆包
- 线上前端 bundle 当前仍内嵌 `https://api.yoyuzh.xyz/api`,API 子域名异常时会直接表现为“网络异常/登录失败”
- 当前 Android 工程里的 Google Maven 镜像改动有一部分落在生成/依赖文件中;如果后续升级 Capacitor 或重新 `npm install`,需要重新确认 `front/android/build.gradle`、`front/android/capacitor-cordova-android-plugins/build.gradle`、`front/node_modules/@capacitor/android/capacitor/build.gradle` 的仓库源仍指向可访问镜像
- 根目录目前仍有 `开发测试账号.md`、`需求文档.md`、`模板/` 等非运行时资料,后续如需继续瘦身可再决定是否迁入 `docs/` 或单独资料目录
关键约束
(只写这个任务特有的限制,区别于项目通用规则)
- 仓库根目录没有 `package.json`,不要在根目录执行 `npm` 命令
- 前端真实命令以 `front/package.json` 为准;`npm run lint` 实际是 `tsc --noEmit`
- 后端真实命令以 `backend/pom.xml` / `backend/README.md` 为准;常用的是 `mvn test` 和 `mvn package`
- 修改文件时默认用 `apply_patch`
- 根目录 `.env` 现在是本地密钥、部署参数和服务器 SSH 元信息的统一入口;`.env.example` 是模板,`.env.oss.local` 不再作为主入口
- 已知线上后端服务名是 `my-site-api.service`
- 已知线上后端运行包路径是 `/opt/yoyuzh/yoyuzh-portal-backend.jar`
- 已知新服务器公网 IP 是 `1.14.49.201`
- 已知线上后端额外配置文件是 `/opt/yoyuzh/application-prod.yml`,环境变量文件是 `/opt/yoyuzh/app.env`
- 2026-04-01 已将线上文件桶与前端桶切到多吉云对象存储,后端配置走多吉云临时密钥 API
- 2026-04-02 部署验证:`http://yoyuzh.xyz/` 返回 200,`https://yoyuzh.xyz/` 返回 200,`https://api.yoyuzh.xyz/swagger-ui.html` 最终返回 200,前端资源 `https://yoyuzh.xyz/assets/AdminApp-C9j3tmPO.js` 返回 200
- 2026-04-02 后端服务重启后为 active,启动时间为 `2026-04-02 12:14:25 CST`
- 2026-04-02 再次部署后端,`my-site-api.service` 启动时间更新为 `2026-04-02 17:26:16 CST`,生产接口 `/api/transfer/sessions/offline/mine` 返回已恢复正常
- 2026-04-02 再次发布前端,移动端背景修复对应资源为 `index-DdEYkdGD.js`、`index-qIc3rBab.css`、`AdminApp-DFQ6SlBP.js`
- 2026-04-02 共享 blob 上线前检查:生产库普通文件里 `storage_name` 为空的脏数据数量为 0,总普通文件数为 55
- 2026-04-02 新 blob 模型依赖应用启动时的 `FileBlobBackfillService` 把旧 `storage_name` 行回填到 `blob_id`;如线上表里存在缺少 `storage_name` 且 `blob_id` 为空的历史脏数据,启动会直接失败并暴露该文件 ID
- 2026-04-02 共享 blob 上线后校验:`portal_file.blob_id` 列已存在,普通文件 `blob_id IS NULL` 数量为 0,`portal_file_blob` 当前共有 54 条记录
- 2026-04-02 18:45 CST 线上上传报 `Column 'storage_name' cannot be null`,已定位为旧表结构未把 `portal_file.storage_name` 放宽为可空;已在线执行 `ALTER TABLE portal_file MODIFY storage_name varchar(255) NULL` 修复
- 2026-04-02 19:08 CST 再次发布后端,`my-site-api.service` 启动时间更新为 `2026-04-02 19:08:14 CST`,`https://api.yoyuzh.xyz/swagger-ui.html` 再次确认返回 200
- 2026-04-04 私有 apk/ipa 下载链路已改为“后端鉴权后返回短时 `https://api.yoyuzh.xyz/_dl/...` 链接,Nginx `secure_link` 校验通过后再代理到 `dl.yoyuzh.xyz` 对象域名”;这样安装包不再走默认 `*.myqcloud.com` 域名,也不再暴露长期可用的公开 `dl` 直链
- 2026-04-04 12:48 CST 已将私有 apk/ipa 的 `/_dl` 短时签名修复重新部署到生产;`my-site-api.service` 重启成功,`https://api.yoyuzh.xyz/swagger-ui/index.html` 返回 200,带签名的 `https://api.yoyuzh.xyz/_dl/...` 实测返回 200 OK
- 2026-04-05 Git 远程 `origin` 已改为私有 Gitea 仓库 `https://git.yoyuzh.xyz/yoyuz/my_site.git`,默认分支 `main` 已建立对 `origin/main` 的跟踪
- 2026-04-05 仓库当前不再把密码文件、本地环境变量文件和前端生产环境文件视为必须忽略项;提交前要主动区分“想入库的私有配置”与“仍应保留本地的临时产物”
- Android 本机构建当前默认 SDK 根目录为 `/Users/mac/Library/Android/sdk`
- Android 本地打包命令链:`cd front && npm run build` -> `cd front && npx cap sync android` -> `cd front/android && ./gradlew assembleDebug`
- Android 一键发包命令:`node scripts/deploy-android-apk.mjs`
- Android 调试 APK 当前输出路径:`front/android/app/build/outputs/apk/debug/app-debug.apk`
- Android APK 独立发包命令:`node scripts/deploy-android-release.mjs`
- 服务器登录信息保存在根目录 `.env`,不要把内容写进文档或对外输出
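上面提到的 `/_dl` 短时签名链路可以用下面的草图说明。这是假设性示意:真实的 Nginx 表达式、参数名与密钥以线上配置为准,这里假设配置了形如 `secure_link_md5 "$secure_link_expires$uri <secret>"` 的常见写法,签名值按 `secure_link` 模块约定做 base64url 处理:

```typescript
import { createHash } from "node:crypto";

// 生成带过期时间的 /_dl 短时签名链接(假设性草图,非仓库原文)。
// 签名 = base64url(md5("<expires><uri> <secret>")),与 Nginx 侧
// secure_link_md5 表达式逐字节一致才能校验通过。
function signDownloadUrl(
  uri: string,
  secret: string,
  ttlSeconds: number,
  nowEpochSeconds: number = Math.floor(Date.now() / 1000),
): string {
  const expires = nowEpochSeconds + ttlSeconds;
  const digest = createHash("md5").update(`${expires}${uri} ${secret}`).digest("base64");
  // secure_link 约定的 base64url 变体:+ -> -,/ -> _,去掉填充 =
  const md5 = digest.replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
  return `${uri}?md5=${md5}&expires=${expires}`;
}
```

后端鉴权通过后返回这样的链接,Nginx 用同一密钥重算 md5 并比对 `expires`,过期或签名不符即拒绝,客户端因此拿不到长期可用的直链。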
参考资料
(相关链接、文档片段、背景资料)
- 根目录说明:`README.md`
- 后端说明:`backend/README.md`
- 仓库协作规范:`AGENTS.md`
- agent / handoff 补充文档:`docs/agents/handoff.md`
- 前端/后端工作区配置:`.vscode/settings.json`、`.vscode/extensions.json`
- Lombok 配置:`lombok.config`
- 最近关键实现位置:
  - 分端会话登录:`backend/src/main/java/com/yoyuzh/auth/AuthService.java`
  - JWT 会话校验:`backend/src/main/java/com/yoyuzh/auth/JwtTokenProvider.java`
  - JWT 过滤器:`backend/src/main/java/com/yoyuzh/config/JwtAuthenticationFilter.java`
  - CORS 配置:`backend/src/main/java/com/yoyuzh/config/CorsProperties.java`、`backend/src/main/resources/application.yml`
  - 密码策略:`backend/src/main/java/com/yoyuzh/auth/PasswordPolicy.java`
  - 网盘树状目录:`front/src/pages/Files.tsx`、`front/src/pages/files-tree.ts`
  - 快传接收页:`front/src/pages/TransferReceive.tsx`
  - 未登录快传权限:`backend/src/main/java/com/yoyuzh/transfer/TransferController.java`、`backend/src/main/java/com/yoyuzh/transfer/TransferService.java`
  - 离线快传历史与详情弹层:`front/src/pages/Transfer.tsx`、`front/src/pages/transfer-state.ts`
  - 移动端入口切换:`front/src/main.tsx`、`front/src/MobileApp.tsx`、`front/src/lib/app-shell.ts`
  - 管理员改密接口:`backend/src/main/java/com/yoyuzh/admin/AdminService.java`
  - 管理台统计与 7 天上线记录:`backend/src/main/java/com/yoyuzh/admin/AdminMetricsService.java`、`backend/src/main/java/com/yoyuzh/admin/AdminDailyActiveUserEntity.java`、`backend/src/main/java/com/yoyuzh/config/JwtAuthenticationFilter.java`
  - 管理台 dashboard 展示与请求折线图:`front/src/admin/dashboard.tsx`、`front/src/admin/dashboard-state.ts`
  - 网盘 blob 模型与回填:`backend/src/main/java/com/yoyuzh/files/core/FileService.java`、`backend/src/main/java/com/yoyuzh/files/core/FileBlob.java`、`backend/src/main/java/com/yoyuzh/files/core/FileBlobBackfillService.java`
  - 网盘回收站与恢复:`backend/src/main/java/com/yoyuzh/files/core/FileService.java`、`backend/src/main/java/com/yoyuzh/files/core/FileController.java`、`backend/src/main/java/com/yoyuzh/files/core/StoredFile.java`、`front/src/pages/RecycleBin.tsx`、`front/src/pages/recycle-bin-state.ts`
  - 前端生产 API 基址:`front/.env.production`
  - Capacitor Android 入口与配置:`front/capacitor.config.ts`、`front/android/`
2026-04-08 阶段 1 升级记录
- 已按 Cloudreve 对照升级工程书落地第一阶段最小骨架:后端新增 `/api/v2/site/ping`、`ApiV2Response`、`ApiV2ErrorCode`、`ApiV2Exception` 与 v2 专用异常处理器,旧 `/api/**` 响应模型暂不替换。
- 前端 `front/src/lib/api.ts` 新增 `X-Yoyuzh-Client-Id` 约定和 `apiV2Request()`,内部 API 请求会携带稳定 client id;外部签名上传 URL 不携带该头。
- 修正 `.gitignore` 中 `storage/` 误忽略任意层级 `storage` 包的问题,改为只忽略仓库根 `/storage/` 和本地运行数据 `/backend/storage/`,否则 `backend/src/main/java/com/yoyuzh/files/storage/*` 会被误隐藏。
2026-04-08 阶段 2 第一小步记录
- 已新增文件实体模型二期的兼容表模型:`FileEntity`、`StoredFileEntity`、`FileEntityType`,并在 `StoredFile` 上新增 `primaryEntity` 与 `updatedAt`。
- 已新增 `FileEntityBackfillService`,启动后在旧 `FileBlob` 仍保留的前提下,把已有 `StoredFile.blob` 只增量映射到 `FileEntity.VERSION` 与 `StoredFile.primaryEntity`;现有下载、复制、移动、分享、回收站读写路径暂不切换。
- 当时阶段未删除 `FileBlob`,未切换前端,也还未引入上传会话二期。
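“逻辑元数据 + 物理对象引用”的共享 blob 模型,核心是引用计数式回收,可以用下面的草图说明。这是假设性示意,真实回收逻辑在后端 `FileService` 与回收站过期清理里;类名与字段为本文虚构:

```typescript
// 共享 blob 引用计数的最小示意(非仓库原文):
// 复制、分享导入只增加引用;删除递减引用;
// 最后一个引用消失时才删除底层 blobs/... 物理对象。
class BlobStore {
  private refCount = new Map<string, number>();
  deletedObjects: string[] = []; // 模拟已回收的物理对象 key

  addReference(blobId: string): void {
    this.refCount.set(blobId, (this.refCount.get(blobId) ?? 0) + 1);
  }

  removeReference(blobId: string): void {
    const next = (this.refCount.get(blobId) ?? 0) - 1;
    if (next <= 0) {
      this.refCount.delete(blobId);
      this.deletedObjects.push(blobId); // 最后引用消失,回收物理对象
    } else {
      this.refCount.set(blobId, next);
    }
  }
}
```

这也解释了上文管理台 `totalStorageBytes` 为什么要按 `FileBlob` 汇总:同一物理对象可能被多行逻辑文件引用,按逻辑行累加会重复计费。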
2026-04-08 阶段 2 第二小步记录
- 文件写入路径开始双写 `FileBlob` + `FileEntity.VERSION`:普通代理上传、直传完成、外部文件导入、分享导入,以及网盘复制复用 blob 时,都会给新 `StoredFile` 写入 `primaryEntity` 并创建 `StoredFileEntity(PRIMARY)` 关系。
- 当前仍不切换读取路径:下载、ZIP、分享详情、回收站等旧业务继续依赖 `StoredFile.blob`,`primaryEntity` 只作为后续版本、缩略图、转码、存储策略迁移的兼容数据。
- 为避免新关系表阻塞现有删除和测试清理,`StoredFileEntity -> StoredFile` 使用数据库级删除级联;`FileEntity.createdBy` 删除用户时置空,保留物理实体审计数据但不阻塞用户清理。
- 2026-04-08 阶段 3 第一小步:新增后端上传会话二期最小骨架,包含 `UploadSession`、`UploadSessionStatus`、`UploadSessionRepository`、`UploadSessionService`,以及受保护的 `/api/v2/files/upload-sessions` 创建、查询、取消接口;旧 `/api/files/upload/**` 上传链路暂不切换,前端上传队列暂不改动。
- 2026-04-08 阶段 3 第二小步:新增 `POST /api/v2/files/upload-sessions/{sessionId}/complete`,v2 上传会话可从 `CREATED` 进入 `COMPLETING` 并复用旧 `FileService.completeUpload()` 完成 `FileBlob` + `StoredFile` + `FileEntity.VERSION` 落库,成功后标记 `COMPLETED`;取消、失败、过期会话不能完成。实际分片内容上传和前端上传队列仍未切换。
- 2026-04-08 阶段 3 第三小步:新增 `PUT /api/v2/files/upload-sessions/{sessionId}/parts/{partIndex}`,用于记录当前用户上传会话的 part 元数据到 `uploadedPartsJson`,并把会话状态从 `CREATED` 推进到 `UPLOADING`;该接口只记录 `etag`/`size` 等状态,不承担真正的对象存储分片内容写入或合并。
- 2026-04-08 阶段 3 第四小步:`UploadSessionService` 新增定时过期清理,按小时扫描 `CREATED/UPLOADING/COMPLETING` 且已过期的会话,尝试删除对应临时 `blobs/...` 对象,并把会话标记为 `EXPIRED`;`COMPLETED/CANCELLED/FAILED/EXPIRED` 不在本轮清理范围内。
- 2026-04-08 multipart 评估结论:暂不把 v2 上传会话直接接入真实对象存储分片写入/合并。当前 `FileContentStorage` 仍是单对象上传/校验抽象,缺少 multipart uploadId、part URL 预签名、complete/abort 语义;立即接入会把上传会话写死在当前多吉云 S3 配置上,并让过期清理误以为 `deleteBlob` 能释放未完成分片。下一步先做阶段 4 存储策略与能力声明骨架,再按 `multipartUpload` 能力接 S3 multipart。
- 2026-04-08 阶段 4 第一小步:新增 `StoragePolicy`、`StoragePolicyType`、`StoragePolicyCredentialMode`、`StoragePolicyCapabilities` 与 `StoragePolicyService`,启动时把当前 `app.storage.provider` 映射成一条默认策略;当时本地策略声明 `serverProxyDownload=true`、`multipartUpload=false`,多吉云/S3 兼容策略也先声明为 `directUpload=true`、`signedDownloadUrl=true`、`requiresCors=true`、`multipartUpload=false`。新 v2 上传会话会记录默认 `storagePolicyId`,但旧上传下载路径和前端上传队列仍未切换。
- 2026-04-08 合并 `files/storage` 补提交后修复:`S3FileContentStorage` 改为复用 `DogeCloudS3SessionProvider`/`DogeCloudTmpTokenClient` 获取并缓存运行期 `S3Client` 与 `S3Presigner`,保留生产构造器 `S3FileContentStorage(FileStorageProperties)`,同时提供测试用注入构造器;S3 直传、签名下载、上传校验、读旧对象键 fallback、rename/move/copy、离线快传对象读写继续通过 `FileContentStorage` 统一抽象。
- 2026-04-08 阶段 4 第二小步:新写入和回填生成的 `FileEntity.VERSION` 会记录默认 `StoragePolicy.id` 到 `storagePolicyId`,让物理实体可以追踪归属存储策略;复用已有 `FileEntity` 时只增加引用计数,不覆盖历史实体策略字段。旧 `/api/files/**` 读取路径仍继续依赖 `StoredFile.blob`。
- 2026-04-08 阶段 4 第三小步:新增管理员只读存储策略查看能力,后端暴露 `GET /api/admin/storage-policies`,前端管理台新增“存储策略”资源列表和能力矩阵展示;该接口只返回白名单 DTO 与结构化 `StoragePolicyCapabilities`,不暴露凭证,也不支持新增/编辑/启停/删除策略。
- 2026-04-08 阶段 5 第一小步:新增用户侧 v2 文件搜索最小闭环,后端暴露受保护的 `GET /api/v2/files/search`,复用 `StoredFile` 查询当前用户未删除文件,支持 `name`、`type=file|directory|folder|all`、`sizeGte/sizeLte`、`createdGte/createdLte`、`updatedGte/updatedLte` 与分页;同时新增 `FileMetadata`/`FileMetadataRepository` 扩展表骨架,暂不迁移回收站字段、暂不接入标签/metadata 过滤、暂不改前端上传队列和旧 `/api/files/**` 行为。
- 2026-04-08 阶段 5 第二小步:前端桌面端接入最小搜索下游,新增 `front/src/lib/file-search.ts` 和 `front/src/lib/file-search.test.ts`,桌面 `front/src/pages/Files.tsx` 可通过 v2 search 单独搜索并展示结果,不写入 `getFilesListCacheKey(...)`,也不影响原有目录缓存和上传主链路;移动端暂未接入搜索,后续可按同一 helper 补入。
- 2026-04-08 阶段 5 第三小步:新增分享二期后端最小骨架。`FileShareLink` 增加 `passwordHash`、`expiresAt`、`maxDownloads`、`downloadCount`、`viewCount`、`allowImport`、`allowDownload`、`shareName`;新增 `com.yoyuzh.api.v2.shares` 与 `ShareV2Service`,提供 v2 创建、公开读取、密码校验、导入、我的分享列表和删除。公开访问包括 `GET /api/v2/shares/{token}`、`POST /api/v2/shares/{token}/verify-password`,以及 `GET /api/v2/shares/{token}?download=1` 下载入口;后者会统一校验过期时间、密码、`allowDownload` 和 `maxDownloads`,成功后复用现有下载链路并递增 `downloadCount`。创建、导入、我的分享、删除仍需登录;v2 导入仍会先校验过期时间、密码、`allowImport` 和 `maxDownloads`,再复用旧导入持久化链路;旧 `/api/files/share-links/**` 继续兼容。
- 2026-04-08 阶段 5 第四小步:新增文件事件流前后端最小闭环。后端落地
`FileEvent`/`FileEventType`/`FileEventRepository`/`FileEventService`,并提供受保护的 `GET /api/v2/files/events?path=/` SSE 入口;当前可按用户广播、按路径前缀过滤、按 `X-Yoyuzh-Client-Id` 抑制自身事件,首次连接会收到 `READY` 事件。前端新增 fetch-stream 版 `front/src/lib/file-events.ts`,不直接使用无法带鉴权头的原生 `EventSource`;桌面 `Files` 与移动 `MobileFiles` 已订阅当前目录事件,收到文件变更后失效当前目录缓存并刷新列表,搜索结果状态不被清空。
- 2026-04-09 阶段 5 第五小步:上传会话二期后端接入真实 multipart。`FileContentStorage` 新增 `createMultipartUpload/prepareMultipartPartUpload/completeMultipartUpload/abortMultipartUpload` 抽象,`S3FileContentStorage` 用预签名 `UploadPart` 和 `Complete/AbortMultipartUpload` 落地实现;默认 S3 存储策略能力改为 `multipartUpload=true`。`UploadSession` 新增 `multipartUploadId`,创建会话时若默认策略支持 multipart 会立即初始化 uploadId;v2 会话响应新增 `multipartUpload`,并开放 `GET /api/v2/files/upload-sessions/{sessionId}/parts/{partIndex}/prepare` 返回单分片直传地址。完成会话时会先按已记录 part 元数据提交 multipart complete,再复用旧 `FileService.completeUpload()` 落库;过期清理也会对未完成 multipart 执行 abort。前端上传队列仍未切到这条新链路。
- 2026-04-08 阶段 6 第一步:新增后台任务框架与 worker 最小骨架。后端新增 `BackgroundTask`/`BackgroundTaskType`/`BackgroundTaskStatus`/`BackgroundTaskRepository`/`BackgroundTaskService`,并暴露受保护的 `GET /api/v2/tasks`、`GET /api/v2/tasks/{id}`、`DELETE /api/v2/tasks/{id}` 以及 `POST /api/v2/tasks/archive`、`POST /api/v2/tasks/extract`、`POST /api/v2/tasks/media-metadata` 创建接口;任务创建入口会校验 `fileId` 属于当前用户、未删除、请求 `path` 匹配服务端派生逻辑路径,并按任务类型限制目录、zip-compatible 解压源和媒体文件,任务 state 使用服务端文件信息。
- 2026-04-09 阶段 6 第二步:`MEDIA_META` 之外的后台任务开始真实化。`ARCHIVE` 任务现在会派生 `outputPath`/`outputFilename`,由 `ArchiveBackgroundTaskHandler` 复用 `FileService.buildArchiveBytes(...)` 把目录或单文件打成 zip,并通过 `importExternalFile(...)` 写回同级目录;`EXTRACT` 任务现在会派生 `outputPath`/`outputDirectoryName`,由 `ExtractBackgroundTaskHandler` 读取 zip-compatible 归档、剥离共享根目录、支持单文件归档直接恢复到父目录,并通过 `FileService.importExternalFilesAtomically(...)` 在预检冲突后批量落库,失败时清理已写入的 `blobs/...`,避免留下孤儿 blob。worker 仍按 `QUEUED -> RUNNING -> COMPLETED/FAILED` 驱动,当前未实现非 zip 解压格式、缩略图/视频时长,以及 archive/extract 的前端入口。
- 2026-04-09 阶段 6 第三步:后台任务新增最小 progress 字段,但仍不做假百分比。`BackgroundTaskService` 现在会在 `publicStateJson` 里统一维护 `phase`:创建时为 `queued`,claim 后为 `running`,worker 开始执行时按任务类型细化成 `archiving/extracting/extracting-metadata`,完成/失败/取消时分别收口为 `completed/failed/cancelled`。`GET /api/v2/tasks/**` 会直接透出这些阶段;`BackgroundTaskV2ControllerIntegrationTest` 也已覆盖 archive/extract 完成态、extract 失败态和取消态的 phase 回读。
- 2026-04-09 阶段 6 第六步:`ARCHIVE/EXTRACT` 后台任务补了真实条目计数进度。worker 现在会把 progress reporter 传入 handler;`ARCHIVE` 会按实际写入 zip entry 推进 `processedFileCount/totalFileCount` 与 `processedDirectoryCount/totalDirectoryCount`,`EXTRACT` 会按实际创建目录和导入文件推进同一组字段。重试和启动恢复仍按 `privateStateJson` 重建公开 state,因此这些运行期计数字段不会被错误保留到下一次执行。
- 2026-04-09 阶段 6 第四步:后台任务补了最小手动重试闭环。后端新增 `POST /api/v2/tasks/{id}/retry`,只允许当前用户把自己 `FAILED` 状态的任务重新置回 `QUEUED`;重试时会清空 `finishedAt`/`errorMessage`,按 `privateStateJson` 重建公开 state,并把 `publicStateJson.phase` 重置为 `queued`,不会保留失败时写入的 `worker` 等瞬时字段。
- 2026-04-09 阶段 6 第五步:后台任务补了服务启动时的 `RUNNING` 恢复。最初版本会在 `ApplicationReadyEvent` 后直接把遗留 `RUNNING` 任务重排回 `QUEUED`;2026-04-09 晚些时候又升级为只回收 lease 已过期或旧数据里缺少 lease 的 `RUNNING` 任务,避免多实例场景误抢活跃 worker。
- 2026-04-09 阶段 6 第七步:后台任务补了保守的自动重试/退避骨架。`BackgroundTask` 现在有 `attemptCount/maxAttempts/nextRunAt`;最初 `ARCHIVE`、`EXTRACT`、`MEDIA_META` 都默认最多执行 3 次,worker claim 时会递增 `attemptCount`。同日后续又升级为按任务类型区分预算与退避:`ARCHIVE` 最多 4 次、`EXTRACT` 最多 3 次、`MEDIA_META` 最多 2 次;失败分类从布尔可重试升级为 `UNSUPPORTED_INPUT/DATA_STATE/TRANSIENT_INFRASTRUCTURE/RATE_LIMITED/UNKNOWN`,公开 state 会写入 `failureCategory` 与 `retryDelaySeconds`,并按类别和任务类型决定是否自动回队列及退避时长。
- 2026-04-09 阶段 6 第八步:后台任务补了运行期 heartbeat 与多实例 lease。`BackgroundTask` 现在持久化 `leaseOwner/leaseExpiresAt/heartbeatAt`;worker 每次 claim 会写入唯一 `workerOwner` 并续租,运行中 progress/完成/失败都会刷新 heartbeat。`ARCHIVE/EXTRACT` 的公开 state 现已附带真实 `progressPercent`,`MEDIA_META` 会暴露 `metadataStage`;多实例下会先回收 lease 过期的 `RUNNING` 任务,再领取 `QUEUED` 任务,旧 worker 若丢失 owner 则不会再覆盖新状态。
- 2026-04-09 桌面端 `Files` 已补最近 10 条后台任务面板,支持查看状态、取消 `QUEUED/RUNNING` 任务,并可为当前选中文件创建媒体信息提取任务;移动端和 archive/extract 的前端入口暂未接入。
- 2026-04-09 files 后端结构清理:`backend/src/main/java/com/yoyuzh/files` 不再平铺大部分领域类,现已按职责重组为 `core/upload/share/search/events/tasks/storage/policy` 八个子包;类名、接口路径、数据库表名/字段名和现有测试语义保持不变,主要是通过 package 重组、import 修正和测试路径同步降低后续继续演进 upload/share/search/events/tasks/storage-policy 的维护摩擦。
- 2026-04-09 存储策略管理后端继续收口:管理员接口已从只读 `GET /api/admin/storage-policies` 扩展到 `POST /api/admin/storage-policies`、`PUT /api/admin/storage-policies/{policyId}`、`PATCH /api/admin/storage-policies/{policyId}/status` 和 `POST /api/admin/storage-policies/migrations`。当前支持新增、编辑、启停非默认策略,并可创建 `STORAGE_POLICY_MIGRATION` 后台任务;默认策略不能停用,仍不支持删除策略或切换默认策略。
- 2026-04-09 存储策略与上传路径后端继续推进:`STORAGE_POLICY_MIGRATION` 现已从 skeleton 升级为“当前活动存储后端内的真实迁移”。worker 会限制源/目标策略必须同类型,读取旧 `FileBlob` 对象字节,写入新的 `policies/{targetPolicyId}/blobs/...` object key,同步更新 `FileBlob.objectKey` 与 `FileEntity.VERSION` 的 `(objectKey, storagePolicyId)`,并在事务提交后异步清理旧对象;若处理中失败,会删除本轮新写对象并依赖事务回滚元数据。与此同时,v2 upload session 现在会按默认策略能力决策 `uploadMode=PROXY|DIRECT_SINGLE|DIRECT_MULTIPART`:`directUpload=false` 时走 `POST /api/v2/files/upload-sessions/{sessionId}/content` 代理上传,`directUpload=true && multipartUpload=false` 时走 `GET /api/v2/files/upload-sessions/{sessionId}/prepare` 单请求直传,`multipartUpload=true` 时继续走现有分片 prepare/record/complete 链路;会话响应还会附带 `strategy`,把当前模式下的后续后端入口模板显式返回给前端;旧 `/api/files/upload/initiate` 也会尊重默认策略的 `directUpload`/`maxObjectSize`。
- 2026-04-09 前端 files 上传链路已切到 v2 upload session:桌面端 `FilesPage`、移动端 `MobileFilesPage` 和 `saveFileToNetdisk()` 现在统一通过 `front/src/lib/upload-session.ts` 走 `create/get/cancel/prepare/content/part-prepare/part-record/complete` 全套 helper,并按后端返回的 `uploadMode + strategy` 自动选择 `PROXY / DIRECT_SINGLE / DIRECT_MULTIPART`。旧 `/api/files/upload/**` 当前仍保留给头像等非 files 子系统入口使用。
- 2026-04-10 存储策略与上传路径后端进入正式迁移,并完成前端视觉系统全面升级:
  - 后端:`STORAGE_POLICY_MIGRATION` 任务逻辑完整化,支持同类型后端间的数据物理迁移与元数据同步;v2 upload session 现已按策略能力矩阵分发 `PROXY / DIRECT_SINGLE / DIRECT_MULTIPART` 策略。
  - 前端视觉:全站 UI 已重构为“Stitch”玻璃拟态 (Glassmorphism) 风格。引入全局 `bg-aurora` 背景、`.glass-panel` 通用样式类、`ThemeProvider` 与 `ThemeToggle` 亮暗色切换。
  - 前端模块:网盘、快传、分享、任务、回收站、移动端布局、管理台 Dashboard、用户、文件、存储策略等所有核心视图均已完成视觉重构,在保持原有数据绑定与逻辑闭环的前提下统一了质感表现。
  - 前端技术栈:由于 `front/` 根目录不直接由 UI 框架管理,通过 `src/components/` 及其对应 hooks/lib 实现了一套自定义的主题与玻璃态组件库,并解决了 overhaul 过程中引入的所有 TypeScript / Lint 缺失引用问题。
- 2026-04-10 Cloudreve gap 后端升级计划已完成 Stage 1 第一批:
  - 新增 Spring Cache 与 Spring Data Redis 依赖,`application.yml`/`application-dev.yml` 增加 `spring.data.redis.*` 与默认关闭的 `app.redis.*` 配置骨架;`spring.data.redis.repositories.enabled=false`,当前不启用 Redis repository。
  - 新增 `AppRedisProperties`、`RedisConfiguration`、`RedisCacheNames`,把 Redis 使用边界拆成 `cache/auth/transfer-sessions/upload-state/locks/file-events/broker` 命名空间;Redis 关闭时回退到 `NoOpCacheManager`,不强依赖本地或 dev 环境外部 Redis。
  - 新增 `AuthTokenInvalidationService`:Redis 启用时按 `userId + clientType` 写入 access token 的失效时间标记,并把被撤销 refresh token 的 hash 以剩余有效期 TTL 写入 Redis 黑名单;Redis 关闭时自动使用 no-op 实现。
  - `AuthService` 的同端重登与改密、`AdminService` 的封禁/改密/重置密码、`RefreshTokenService` 的轮换/批量撤销/过期拒绝,现已统一接到这套 Redis 登录态失效层。
  - `JwtAuthenticationFilter` 现在会在原有 JWT + `sid` 校验前先检查 Redis access token 失效标记;快传 session、热目录缓存、分布式锁、文件事件跨实例广播和轻量 broker 仍留在后续 Stage 1 小步。
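`AuthTokenInvalidationService` 的失效判定可以用下面的草图说明。这是假设性示意,真实实现在后端;类名与方法为本文虚构。关键点是撤销游标与 JWT `iat` 都按“秒”比较(后文 2026-04-09 的手工验证正是发现毫秒游标对秒级 `iat` 会把新 token 误判为已撤销):

```typescript
// Redis 登录态失效标记的最小示意(非仓库原文):
// 同端重登/改密时记录一个“此刻之前签发的 token 全部失效”的秒级游标,
// 过滤器在 JWT 校验前比较 token 的 iat(秒)与该游标。
class TokenInvalidation {
  // key 形如 `${userId}:${clientType}`,值是撤销游标(epoch 秒)
  private cutoffSeconds = new Map<string, number>();

  revokeBefore(userId: string, client: string, epochSeconds: number): void {
    this.cutoffSeconds.set(`${userId}:${client}`, epochSeconds);
  }

  isRevoked(userId: string, client: string, tokenIatSeconds: number): boolean {
    const cutoff = this.cutoffSeconds.get(`${userId}:${client}`);
    return cutoff !== undefined && tokenIatSeconds < cutoff;
  }
}
```

统一用 epoch 秒存储,新签发的 token(`iat >= cutoff`)天然通过,不会出现毫秒/秒混用导致的误杀。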
2026-04-10 Stage 1 Batch 2
- `/api/files/list` 现已接入可选 Redis 热目录分页缓存,缓存 key 固定包含 `userId + path + page + size + sort context + directory version`,并在创建、删除、移动、复制、重命名、恢复、上传完成和导入后按目录版本精准失效。
- 第一批分布式锁已落在回收站恢复路径,`FileService.restoreFromRecycleBin(...)` 通过 Redis `locks` 命名空间做带 TTL 和 owner token 的互斥,避免同一条目被并发恢复。
- 上传会话短状态现已进入 Redis `upload-state` 命名空间,`UploadSessionService` 会在创建、上传中、完成、取消、失败、过期时刷新运行态;`GET /api/v2/files/upload-sessions/{sessionId}` 响应新增 `runtime` 字段,前端可直接读取 phase、uploadedBytes、uploadedPartCount、progressPercent、lastUpdatedAt、expiresAt。
- 这一批后端升级已通过 `cd backend && mvn test` 全量验证,结果为 277 tests passed。
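上面“缓存 key 含目录版本、写操作按目录版本精准失效”的思路可以用下面的草图说明。这是假设性示意(类名与 key 前缀为本文虚构),核心是版本号失效:写操作只需把目录版本 +1,所有包含旧版本号的分页 key 自然失效,不必枚举删除每个 `page/size/sort` 组合:

```typescript
// 目录版本式缓存 key 的最小示意(非仓库原文)。
class DirectoryCacheKeys {
  private versions = new Map<string, number>();

  private versionOf(userId: string, path: string): number {
    return this.versions.get(`${userId}:${path}`) ?? 0;
  }

  // 读路径:key 携带当前目录版本,命中即为最新
  key(userId: string, path: string, page: number, size: number, sort: string): string {
    return `files:list:${userId}:${path}:${page}:${size}:${sort}:v${this.versionOf(userId, path)}`;
  }

  // 写路径(创建/删除/移动/重命名等):只递增版本,不逐 key 删除
  invalidate(userId: string, path: string): void {
    this.versions.set(`${userId}:${path}`, this.versionOf(userId, path) + 1);
  }
}
```

旧版本的 key 不再被读取,依赖 Redis 自身 TTL 自然淘汰即可。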
2026-04-10 Stage 1 Batch 3
- Stage 1 Step 7 已落地首批轻量 broker:新增 `LightweightBrokerService` 抽象,Redis 启用时走 Redis list,Redis 关闭时回退到内存队列,继续支持本地单实例开发和测试。
- 当前 broker 的首个真实用例是媒体任务自动触发:`FileService.saveFileMetadata(...)` 会在媒体文件元数据落库并提交事务后,通过 `MediaMetadataTaskBrokerPublisher` 发布 `media-metadata-trigger`。
- `MediaMetadataTaskBrokerConsumer` 会批量 drain 这类消息,并调用 `BackgroundTaskService.createQueuedAutoMediaMetadataTask(...)` 创建 `MEDIA_META` 后台任务;创建前会按 `correlationId` 去重,并重新校验文件仍存在、未删除且仍是媒体文件。
- 这批 broker 明确不是高可靠消息系统,也不替代现有数据库 `BackgroundTask` worker;文件事件跨实例广播仍留给 Stage 1 Step 9 的 Redis pub/sub。
- 本批次新增/更新测试后,`cd backend && mvn test` 已通过,结果为 281 tests passed。
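消费侧“按 correlationId 去重 + 创建前重新校验文件状态”的组合可以用下面的草图说明。这是假设性示意,真实实现是 `MediaMetadataTaskBrokerConsumer`;接口与字段为本文虚构:

```typescript
// 轻量 broker 消费侧去重与再校验的最小示意(非仓库原文)。
interface BrokerMessage {
  correlationId: string; // 发布侧生成,用于幂等去重
  fileId: number;
}

class MediaTaskConsumer {
  private seen = new Set<string>();
  created: number[] = []; // 模拟已创建 MEDIA_META 任务的文件 id

  drain(batch: BrokerMessage[], fileStillEligible: (fileId: number) => boolean): void {
    for (const msg of batch) {
      if (this.seen.has(msg.correlationId)) continue; // 重复投递:跳过
      this.seen.add(msg.correlationId);
      // 消息可能滞后于现实:创建任务前重新校验文件仍存在且仍是媒体文件
      if (!fileStillEligible(msg.fileId)) continue;
      this.created.push(msg.fileId);
    }
  }
}
```

正因为 broker 不保证可靠投递,幂等去重和创建前再校验才是这条链路的正确性边界,而不是消息系统本身。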
2026-04-10 Stage 1 Batch 4
- Stage 1 Step 8 已完成:在线快传 `TransferSessionStore` 不再只依赖进程内 `ConcurrentHashMap`,Redis 启用时会把 session 快照与 `pickupCode -> sessionId` 映射写入 `transfer-sessions` 命名空间;Redis 关闭时自动回退到内存模式。
- `TransferSession` 新增内部快照序列化形状,保留 `receiverJoined`、信令队列、cursor 和文件清单等在线运行态;因此 `joinSession` 和 `postSignal` 在修改在线会话后会重新写回 store,避免 Redis 模式下状态只改在临时副本里。
- `TransferService.nextPickupCode()` 现已复用 store 侧生成逻辑;Redis 启用时会先对 pickup code 做短 TTL 预留,降低多实例并发创建在线快传 session 的碰撞概率。
- 当前这一步只覆盖在线快传跨实例共享;离线快传仍继续走数据库 `OfflineTransferSessionRepository`,文件事件跨实例广播仍留给 Stage 1 Step 9。
- 本批次补了 `TransferServiceTest` 和 `TransferSessionStoreTest`,并已通过 `mvn -Dtest=TransferControllerIntegrationTest,TransferServiceTest,TransferSessionStoreTest test` 与 `cd backend && mvn test`;全量结果为 284 tests passed。
2026-04-10 Stage 1 Batch 5
- Stage 1 Step 9 已完成:文件事件从“仅单实例内存广播”升级为“本地 SSE 广播 + Redis pub/sub 跨实例转发”。本地订阅管理仍留在 `FileEventService` 的内存 `subscriptions`,没有把 `SseEmitter` 或订阅状态存进 Redis。
- 新增 `FileEventCrossInstancePublisher` 抽象与 Redis/no-op 双实现;Redis 开启时,`RedisFileEventPubSubPublisher` 会把已提交的 `FileEvent` 最小快照发布到 `keyPrefix:file-events:pubsub`,并附带当前实例 `instanceId`。
- `RedisFileEventPubSubListener` 会订阅同一 topic,忽略本实例回环消息,只把远端事件重建后交给 `FileEventService.broadcastReplicatedEvent(...)` 做本地 SSE 投递,因此不会重复写 `FileEvent` 表。
- 这批实现明确只解决“多实例下文件事件能到达其它实例上的活跃 SSE 订阅”问题,不提供历史重放、可靠投递或补偿语义;事件持久化事实源仍然是数据库 `portal_file_event`。
- 验证已覆盖 `FileEventServiceTest`、`RedisFileEventPubSubPublisherTest`、`RedisFileEventPubSubListenerTest`、既有 `FileEventPersistenceIntegrationTest`、`FileEventsV2ControllerIntegrationTest`,并通过 `cd backend && mvn test`;全量结果更新为 288 tests passed。
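监听端“按 instanceId 抑制回环”的规则可以用下面的草图说明。这是假设性示意,真实实现是 `RedisFileEventPubSubListener`;类名与字段为本文虚构:发布端在消息里带上自身 `instanceId`,监听端跳过本实例消息、只对远端事件做本地 SSE 投递,因此事件既不会被重复广播,也不会被重复写回事件表:

```typescript
// pub/sub 回环抑制的最小示意(非仓库原文)。
interface ReplicatedEvent {
  instanceId: string; // 发布该事件的后端实例标识
  payload: string;    // 事件最小快照(此处简化为字符串)
}

class CrossInstanceListener {
  delivered: string[] = []; // 模拟交给本地 SSE 订阅者的事件

  constructor(private readonly selfInstanceId: string) {}

  onMessage(event: ReplicatedEvent): void {
    if (event.instanceId === this.selfInstanceId) return; // 本实例回环,已在本地广播过
    this.delivered.push(event.payload); // 仅做本地投递,不再写 FileEvent 表
  }
}
```

持久化仍由发布事件的那个实例独占,这也是“事实源仍是 `portal_file_event`”成立的前提。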
2026-04-10 Stage 1 Batch 6
- Stage 1 Step 10 已完成:`AdminService.listStoragePolicies()` 接入 `admin:storage-policies` Spring Cache,后台存储策略列表现在会在 create/update/status 写操作后做 all-entries eviction;Redis 关闭时仍自动回退到原有非缓存读路径。
- `AndroidReleaseService.getLatestRelease()` 现已接入 `android:release` Spring Cache,当前通过 TTL 控制数据刷新;因为安卓发布元数据是由仓库外的对象存储发布脚本更新,没有同源写路径可以在后端内部显式失效。
- `admin summary` 经评估后暂不缓存,因为其同时包含 request count、daily active users、hourly timeline 等高频统计值,做显式失效会让语义变得不稳定。
- 这批补了 `AdminServiceStoragePolicyCacheTest` 和 `AndroidReleaseServiceCacheTest`,并通过 `mvn -Dtest=AdminControllerIntegrationTest,AndroidReleaseServiceTest,AndroidReleaseControllerTest,AdminServiceStoragePolicyCacheTest,AndroidReleaseServiceCacheTest test` 与 `cd backend && mvn test`,全量结果更新为 293 tests passed。
2026-04-10 Stage 1 Batch 6 Clarification
- Step 10 is complete. `AdminService.listStoragePolicies()` now uses Spring Cache `admin:storage-policies`.
- Successful storage policy create, update, and status-change writes evict that cache.
- `AndroidReleaseService.getLatestRelease()` now uses Spring Cache `android:release`.
- Android release metadata refresh is TTL-driven because updates come from the external release publish script writing `android/releases/latest.json`.
- `admin summary` was evaluated and intentionally left uncached because it includes high-churn metrics without a clean explicit invalidation boundary.
- Verification passed with targeted cache/admin/android tests and full `cd backend && mvn test`.
- Full backend result after this batch: 293 tests passed.
2026-04-10 Stage 1 Batch 7 Clarification
- Stage 1 Step 11 is complete with a deliberate non-change: `DogeCloudS3SessionProvider` stays as a per-instance in-memory runtime cache.
- The provider caches a live `S3FileRuntimeSession` (`S3Client` + `S3Presigner`) and refreshes only when the temporary credentials enter the built-in one-minute refresh window.
- Multi-instance duplicate temporary-token fetches were judged acceptable; the repo does not currently add Redis-based shared credential caching for DogeCloud temporary S3 sessions.
- `DogeCloudS3SessionProviderTest` now also covers refresh-time cleanup of the previous runtime session and explicit `close()` cleanup.
2026-04-10 Stage 1 Batch 8 Clarification
- Stage 1 Step 12 is complete as a validation closeout batch.
- Local verification passed with full `cd backend && mvn test`, keeping the backend suite green at 294 passing tests.
- Redis-disabled boot compatibility was also re-checked: with `APP_REDIS_ENABLED=false`, `APP_JWT_SECRET` set, and the `dev` profile active, the backend booted successfully and reached `Started PortalBackendApplication` on port `18081`.
- What remains unverified locally is environment-bound rather than code-bound: real Redis end-to-end behavior and multi-instance propagation for pub/sub, lightweight broker consumption, and Redis-backed runtime/session sharing.
2026-04-10 Stage 1 Batch 9 Manual Redis Validation
- Stage 1 manual Redis validation was continued with a real local Redis service plus two backend instances on `18081` and `18082`.
- Four real regressions were found and fixed during that validation:
  - `RedisFileEventPubSubPublisher` and `RedisFileEventPubSubListener` needed explicit constructor selection for Spring bean creation in Redis-enabled startup.
  - `AuthTokenInvalidationService` was writing revocation cutoffs in milliseconds while JWT `iat` comparison effectively worked at second precision, causing fresh tokens to be treated as revoked; it now stores epoch seconds and tolerates old millisecond Redis values.
  - The Redis file list cache needed two runtime fixes: cache serialization must use the application `ObjectMapper` so `LocalDateTime` can be written, and cache reads must tolerate generic map payloads returned by Redis cache deserialization.
  - `portal_file.storage_name` was missing in both `mkdir` and normal file upload metadata writes against the current schema, so both paths now persist a non-null legacy storage name.
- Manual multi-instance verification that actually passed:
  - re-login invalidates the old access token and old refresh token while keeping the latest token usable;
  - online transfer lookup still works from instance B after instance A is stopped, proving shared runtime state;
  - uploading `image/png` on instance A delivers a `CREATED` SSE event to instance B and auto-creates one queued `MEDIA_META` task visible from instance B.
- The backend test count is now 301 passing tests after adding coverage for the new Redis/manual-integration regressions.
- A remaining environment note: direct `redis-cli` key scans did not show the expected Redis keys during local probing even though the cross-instance runtime checks proved Redis-backed sharing was active, so runtime behavior is currently stronger evidence than raw key inspection.
Debugging Discipline
- Use short bounded probes first when validating network, dependency, or startup issues. Prefer commands such as `curl --max-time`, `mvn -q`, `mvn dependency:get`, `apt-get update`, and similar narrow checks before launching long-running downloads or full test runs.
- Do not wait indefinitely on a stalled download or progress indicator. If a command appears stuck, stop and re-check DNS, proxy inheritance, mirror reachability, and direct-vs-proxy routing before retrying.
- For WSL debugging, verify the proxy path and the direct path separately, then choose the shortest working route. Do not assume a mirror problem until the network path has been isolated.
- Use domestic mirrors as a delivery optimization, not as a substitute for diagnosis. First determine whether the failure is caused by DNS, proxy configuration, upstream availability, or the mirror itself.
2026-04-11 Admin Backend Surface Addendum
- The next backend phase from
2026-04-10-cloudreve-gap-next-phase-upgrade.mdis now underway on the admin surface. AdminControllerandAdminServicenow expose three new admin data areas:GET /api/admin/file-blobs: entity-centric blob inspection acrossFileEntity,StoredFileEntity, andFileBlob, includingblobMissing,orphanRisk, andreferenceMismatchsignals.GET /api/admin/sharesandDELETE /api/admin/shares/{shareId}: admin-side share listing and forced cleanup forFileShareLink.GET /api/admin/tasksandGET /api/admin/tasks/{taskId}: admin-side background task inspection with parsedfailureCategory,retryScheduled,workerOwner, and derivedleaseState.- The blob admin list is intentionally based on
FileEntityinstead ofStoredFileso storage-policy migration and future multi-entity object lifecycles can be inspected without relying on the legacyStoredFile.blobread path. - Old public/user read flows still intentionally depend on
StoredFile.blob; this batch does not yet switch download/share/recycle/zip reads toprimaryEntity. - Verification for this batch passed with:
cd backend && mvn -Dtest=AdminControllerIntegrationTest,AdminServiceTest,AdminServiceStoragePolicyCacheTest testcd backend && mvn test- Full backend result after this addendum: 304 tests passed.
- 2026-04-11 admin backend batch 2 extended the admin surface with `GET /api/admin/settings` and `GET /api/admin/filesystem`.
  - `GET /api/admin/settings` is intentionally read-only and runtime-oriented. It currently exposes invite-code state, configured admin usernames, JWT session timing, Redis-backed token blacklist availability, queue cadence, and server storage/Redis mode.
  - `GET /api/admin/filesystem` is intentionally operational and read-only. It exposes the active default storage policy snapshot, resolved upload-mode matrix, effective max file size after policy/capability limits, metadata/thumbnail capability flags, cache backend/TTL visibility, aggregate file/blob/entity counts, and the current reserved-off `WebDAV` state.
- 2026-04-11 admin backend batch 3 pushed `Admin-B1` into the first bounded write path: `PATCH /api/admin/settings/registration/invite-code` and `POST /api/admin/settings/registration/invite-code/rotate` now manage the persisted invite code through `RegistrationInviteState`.
  - `GET /api/admin/settings` now returns per-section `writeSupported` flags and a new `transfer` section with the persisted offline-transfer storage limit, so the backend explicitly distinguishes writable settings from runtime/environment-derived read-only settings.
  - The current admin hot-update boundary is now explicit: invite code and offline-transfer storage limit are writable; JWT lifetime, Redis enablement/TTL policy, queue cadence/backend, storage provider, and configured admin usernames remain read-only runtime/config snapshots.
- This batch was verified in WSL with `mvn -Dtest=AdminControllerIntegrationTest,AdminServiceTest,AdminServiceStoragePolicyCacheTest test` and full `mvn test`; backend total is now 310 passing tests.
- WSL-side Maven download failures on 2026-04-11 were traced to missing Maven proxy configuration rather than general network loss. Adding HTTP/HTTPS proxy entries for `127.0.0.1:7890` to the WSL `~/.m2/settings.xml` restored `mvn validate` and `mvn test`.
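A minimal `~/.m2/settings.xml` proxy fragment matching that fix might look like this (a sketch of the standard Maven proxy schema; the `id` values are arbitrary and only the host/port come from the note above):

```xml
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <proxies>
    <proxy>
      <id>wsl-http</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>127.0.0.1</host>
      <port>7890</port>
    </proxy>
    <proxy>
      <id>wsl-https</id>
      <active>true</active>
      <protocol>https</protocol>
      <host>127.0.0.1</host>
      <port>7890</port>
    </proxy>
  </proxies>
</settings>
```
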
2026-04-11 Backend Refactor Batch 1
- A new refactor plan was written to `docs/superpowers/plans/2026-04-11-backend-refactor-plan.md` to lock the next backend cleanup to explicit business rules before further feature work.
- Online transfer session mutation now uses `TransferSessionStore.withSession(...)` as the atomic read-modify-write entrypoint for `joinSession` and `postSignal`. `TransferService` no longer reads the session under a lock and saves it outside the critical section.
- Automatic media-metadata task creation now runs under a correlation-scoped distributed lock in `BackgroundTaskService`. The current boundary is service-level atomicity around `correlationId` rather than a new database uniqueness constraint.
- Lightweight broker delivery for media-metadata triggers now has an explicit `requeue(...)` path. `MediaMetadataTaskBrokerConsumer` drops malformed payloads, but requeues the payload and stops the current batch when downstream task creation throws.
- Regression coverage was added for all three refactor targets:
  - `TransferServiceTest` now asserts online-session mutation goes through the atomic store entrypoint.
  - `BackgroundTaskServiceTest` now asserts correlation-scoped locking around auto media task creation.
  - `MediaMetadataTaskBrokerConsumerTest` now covers both requeue-on-failure and drop-malformed-payload behavior.
- Verification passed with targeted tests `mvn "-Dtest=TransferServiceTest,BackgroundTaskServiceTest,MediaMetadataTaskBrokerConsumerTest" test`. Full backend regression is the next verification step in this session.
- Full backend regression then passed with `cd backend && mvn test`; backend total is now 312 passing tests.
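The atomic read-modify-write shape described above can be sketched in plain Java: the session is loaded, mutated, and saved inside one critical section, so callers can no longer read under the lock and save outside it. All names here are illustrative stand-ins for the real `TransferSessionStore`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.UnaryOperator;

public class SessionStoreSketch {
    private final Map<String, String> sessions = new ConcurrentHashMap<>();

    void put(String id, String state) { sessions.put(id, state); }

    // Single entrypoint: load + mutate + save happen atomically per session key.
    String withSession(String id, UnaryOperator<String> mutation) {
        // ConcurrentHashMap.compute runs atomically for a given key, standing
        // in for the real store's per-session lock.
        return sessions.compute(id, (key, current) -> {
            if (current == null) throw new IllegalStateException("no session " + key);
            return mutation.apply(current);
        });
    }

    public static void main(String[] args) {
        SessionStoreSketch store = new SessionStoreSketch();
        store.put("s1", "WAITING");
        System.out.println(store.withSession("s1", state -> state + "->JOINED"));
    }
}
```

The point of the boundary is that `joinSession`/`postSignal` pass a mutation function in, rather than doing a read-then-write across the lock.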
2026-04-11 Backend Refactor Batch 2
- The auto media-metadata idempotency boundary is now closed at the database layer rather than only at the Redis lock layer.
  - `portal_background_task.correlation_id` now has a database unique constraint, so cross-instance races cannot create two persisted tasks with the same semantic key even if one transaction has not committed when the next instance acquires the Redis lock.
  - `BackgroundTaskService.createQueuedAutoMediaMetadataTask(...)` still uses the correlation-scoped distributed lock to reduce duplicate work, but now also forces the auto-media insert to `saveAndFlush(...)` inside the locked section and treats duplicate-key failures as an idempotent no-op.
- The resulting rule is stricter than the previous batch: for auto-created `MEDIA_META` tasks, correctness no longer depends on Redis lock timing alone; the database is now the final arbiter of `correlationId` uniqueness.
- The lightweight broker poison-message boundary is also tightened:
  - `RedisLightweightBrokerService.poll(...)` now drops malformed raw JSON payloads at the broker layer, logs the event, and continues polling later queue entries instead of throwing out of the consumer batch after the bad payload has already been dequeued.
  - `MediaMetadataTaskBrokerConsumer` therefore now only sees successfully parsed payloads; downstream runtime failures still requeue the payload and stop the current batch, while malformed raw broker payloads are treated as terminal poison messages and isolated locally.
- New regression coverage was added in `BackgroundTaskRepositoryIntegrationTest` for the database uniqueness rule and in `RedisLightweightBrokerServiceTest` for malformed raw-payload skipping.
- Verification passed with `cd backend && mvn "-Dtest=BackgroundTaskServiceTest,BackgroundTaskRepositoryIntegrationTest,RedisLightweightBrokerServiceTest,MediaMetadataTaskBrokerConsumerTest" test` and full `cd backend && mvn test`; backend total is now 315 passing tests.
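The two failure boundaries above can be sketched side by side: malformed raw payloads are dropped at the broker layer (poison isolation), while a downstream consumer failure requeues the already-dequeued payload and stops the batch. The queue, payloads, and failure trigger here are all illustrative:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class BrokerPollSketch {
    // Broker layer: skip payloads that fail to parse, keep polling the rest.
    static List<String> poll(Deque<String> queue, int batchSize) {
        List<String> parsed = new ArrayList<>();
        while (parsed.size() < batchSize && !queue.isEmpty()) {
            String raw = queue.pollFirst();
            if (!raw.startsWith("{")) {   // stand-in for a JSON parse failure
                continue;                 // drop poison payload locally
            }
            parsed.add(raw);
        }
        return parsed;
    }

    // Consumer layer: a runtime failure requeues the payload and stops the batch.
    static void consume(Deque<String> queue, List<String> batch) {
        for (String payload : batch) {
            try {
                if (payload.contains("boom")) throw new RuntimeException("downstream failure");
            } catch (RuntimeException e) {
                queue.addFirst(payload);  // requeue the failed payload
                return;                   // stop the current batch
            }
        }
    }

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>(
                List.of("not-json", "{\"id\":1}", "{\"boom\":true}"));
        List<String> batch = poll(queue, 10);
        System.out.println(batch.size());      // poison dropped, 2 parsed
        consume(queue, batch);
        System.out.println(queue.peekFirst()); // failed payload back at the head
    }
}
```

The asymmetry is deliberate: a payload that can never parse will never succeed and must be isolated, while a payload that failed downstream may succeed on retry.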
2026-04-11 Target Architecture Baseline
- `docs/architecture.md` has been repurposed from a near-current-state business summary into the target enterprise business architecture for future refactoring.
- Future sessions must not treat `docs/architecture.md` as a plain snapshot of the current implementation.
- The document now defines the desired target model: domain-oriented boundaries, unified role model, workspace/content separation, share/transfer separation, unified async job domain, and storage governance as a first-class domain.
- Current implementation details should continue to be discovered from code and `docs/api-reference.md`; architectural alignment should be judged against the target-state `docs/architecture.md`.
- The document scope was further expanded to include three architecture-level appendices that are now part of the baseline itself:
  - a rule decision matrix that assigns each rule family to a single owning domain,
  - a high-risk test scenario list that defines what the target architecture must be able to defend through automation,
  - and a migration / module rollout order that defines the intended landing sequence from current structure to target domains.
2026-04-11 Backend Refactor Batch 3
- The first rule-extraction batch from `docs/superpowers/plans/2026-04-11-backend-refactor-plan.md` is now implemented without changing external API behavior.
- `BackgroundTaskService` now delegates retry and state-JSON concerns to:
  - `BackgroundTaskRetryPolicy`
  - `BackgroundTaskStateManager`
  - `BackgroundTaskStateKeys`
- File-event flow is now split into:
  - `FileEventService` for persistence and after-commit orchestration
  - `FileEventDispatcher` for local SSE subscription and dispatch
  - `FileEventPayloadCodec` for payload serialization and emitter shaping
  - `RedisFileEventPubSubListener` now drops malformed pub/sub payloads locally instead of failing the listener path.
- Upload-session flow is now split into:
  - `UploadPolicyResolver` for upload-mode, effective-size, and chunk rules
  - `UploadSessionStateMachine` for lifecycle transitions and write eligibility
  - `UploadSessionService` as the persistence/runtime coordinator around those rules
- Auth session rotation rules are now extracted into `AuthSessionPolicy`, used by `AuthService` for single-client rotation and all-session rotation.
- New regression tests added:
  - `BackgroundTaskRetryPolicyTest`
  - `UploadSessionStateMachineTest`
  - `AuthSessionPolicyTest`
  - `RedisFileEventPubSubListenerTest` malformed-payload isolation case
- Verification passed with:
  - `cd backend && mvn "-Dtest=BackgroundTaskRetryPolicyTest,UploadSessionStateMachineTest,AuthSessionPolicyTest,FileEventServiceTest,RedisFileEventPubSubListenerTest,BackgroundTaskServiceTest,UploadSessionServiceTest,AuthServiceTest" test`
  - `cd backend && mvn test`
- Full backend result after this batch: 330 tests passed.
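A retry policy of the kind extracted into `BackgroundTaskRetryPolicy` typically combines a bounded attempt count with capped exponential backoff. The concrete constants and method names below are invented for illustration; the repo's actual rule values may differ:

```java
import java.time.Duration;

public class RetryPolicySketch {
    static final int MAX_ATTEMPTS = 5; // illustrative bound, not the repo's value

    static boolean shouldRetry(int failedAttempts) {
        return failedAttempts < MAX_ATTEMPTS;
    }

    // 10s, 20s, 40s, 80s, ... doubling per failed attempt, capped at 10 minutes.
    static Duration backoff(int failedAttempts) {
        long seconds = (long) Math.pow(2, Math.max(0, failedAttempts - 1)) * 10;
        return Duration.ofSeconds(Math.min(seconds, 600));
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(3));          // true
        System.out.println(shouldRetry(5));          // false
        System.out.println(backoff(1).getSeconds()); // 10
        System.out.println(backoff(4).getSeconds()); // 80
    }
}
```

Extracting this into its own class is what makes the rule independently testable, which is exactly what `BackgroundTaskRetryPolicyTest` covers.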
2026-04-11 Backend Refactor Batch 4
- The next admin/auth rule-consolidation batch is now complete as the first direct alignment step against the new target architecture's unified identity/access rules.
- `AdminAccessEvaluator` no longer depends on `app.admin.usernames`; admin-surface access is now derived from authenticated role authorities, with `MODERATOR` and `ADMIN` both treated as management roles for `/api/admin/**`.
- `GET /api/admin/settings` now exposes `registration.managementRoles` instead of configured admin usernames, so the admin settings snapshot reflects the runtime authorization model instead of a legacy username whitelist.
- `AdminService.updateUserBanned(...)` and `AdminService.updateUserPassword(...)` now reuse `AuthSessionPolicy.rotateAllActiveSessions(...)` rather than hand-rolling three UUID rotations inline.
- Dev login role mapping was tightened so `admin -> ADMIN`, `operator`/`moderator -> MODERATOR`, and other dev-login usernames remain `USER`.
- This batch intentionally did not rename persisted `UserRole` enum values yet; the higher-risk role-model/data-migration step remains deferred until the broader target-architecture identity model is landed deliberately.
- Regression coverage was updated across:
  - `AdminControllerIntegrationTest`
  - `AdminServiceTest`
  - `AdminServiceStoragePolicyCacheTest`
  - `AuthServiceTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=AdminControllerIntegrationTest,AdminServiceTest,AdminServiceStoragePolicyCacheTest,AuthServiceTest" test`
  - `cd backend && mvn test`
- Full backend result after this batch: 332 tests passed.
2026-04-11 Backend Refactor Batch 5
- The next admin refactor batch is now complete around the runtime-snapshot vs mutable-settings boundary.
- `AdminController` no longer routes settings/filesystem/invite-code/offline-limit endpoints through the catch-all `AdminService`.
- Read-only admin runtime snapshots now live in `AdminConfigSnapshotService`, covering:
  - `GET /api/admin/settings`
  - `GET /api/admin/filesystem`
- Mutable admin settings writes now live in `AdminMutableSettingsService`, covering:
  - `PATCH /api/admin/settings/registration/invite-code`
  - `POST /api/admin/settings/registration/invite-code/rotate`
  - `PATCH /api/admin/settings/offline-transfer-storage-limit`
- `AdminService` is correspondingly narrower again and now focuses on summary, user governance, file/share/task inspection, storage-policy governance, and related admin operations rather than also owning mixed runtime snapshot/config write concerns.
- Storage-policy response assembly used by both admin storage-policy management and filesystem snapshot code is now shared through `AdminStoragePolicyResponses`, avoiding divergent response shaping during the split.
- Regression coverage was split along the same boundary:
  - `AdminConfigSnapshotServiceTest`
  - `AdminMutableSettingsServiceTest`
  - existing `AdminControllerIntegrationTest`
  - existing `AdminServiceTest`
  - existing `AdminServiceStoragePolicyCacheTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=AdminControllerIntegrationTest,AdminConfigSnapshotServiceTest,AdminMutableSettingsServiceTest,AdminServiceTest,AdminServiceStoragePolicyCacheTest" test`
  - `cd backend && mvn test`
- Full backend result after this batch: 333 tests passed.
2026-04-11 Backend Refactor Batch 6
- The next admin refactor batch is now complete around user-governance boundary extraction.
- `AdminController` no longer routes admin user listing, role updates, ban/unban, password change/reset, storage quota, or max-upload-size writes through `AdminService`.
- Those user-governance responsibilities now live in `AdminUserGovernanceService`, covering:
  - `GET /api/admin/users`
  - `PATCH /api/admin/users/{userId}/role`
  - `PATCH /api/admin/users/{userId}/status`
  - `PUT /api/admin/users/{userId}/password`
  - `PATCH /api/admin/users/{userId}/storage-quota`
  - `PATCH /api/admin/users/{userId}/max-upload-size`
  - `POST /api/admin/users/{userId}/password/reset`
- `AdminUserGovernanceService` now owns the actual user-governance rules: user lookup, password-strength validation, session rotation through `AuthSessionPolicy`, token revocation, used-storage projection, and temporary-password generation.
- `AdminService` is narrower again and now focuses on admin summary, file/blob/share/task inspection, storage-policy governance, and file deletion instead of also owning mutable user-governance flows.
- Regression coverage was realigned to the new boundary:
  - new `AdminUserGovernanceServiceTest`
  - updated `AdminServiceTest`
  - updated `AdminServiceStoragePolicyCacheTest`
  - existing `AdminControllerIntegrationTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=AdminControllerIntegrationTest,AdminUserGovernanceServiceTest,AdminServiceTest,AdminServiceStoragePolicyCacheTest" test`
  - `cd backend && mvn test`
- Full backend result after this batch: 335 tests passed.
2026-04-11 Backend Refactor Batch 7
- The next admin refactor batch is now complete around the remaining governance-write boundary, leaving `AdminService` as a read-only admin query/orchestration surface.
- `AdminController` no longer routes resource-deletion or storage-governance writes through `AdminService`.
- Resource-deletion writes now live in `AdminResourceGovernanceService`, covering:
  - `DELETE /api/admin/shares/{shareId}`
  - `DELETE /api/admin/files/{fileId}`
- Storage-governance writes now live in `AdminStorageGovernanceService`, covering:
  - `POST /api/admin/storage-policies`
  - `PUT /api/admin/storage-policies/{policyId}`
  - `PATCH /api/admin/storage-policies/{policyId}/status`
  - `POST /api/admin/storage-policies/migrations`
- `AdminStorageGovernanceService` now owns storage-policy validation, persistence, cache eviction, and storage-policy migration-task creation, while `AdminService` keeps only admin read paths such as summary, file/blob/share/task inspection, and storage-policy list snapshots.
- `AdminServiceStoragePolicyCacheTest` was updated to verify the intended new boundary explicitly: cached storage-policy reads still come from `AdminService`, and cache eviction now happens when `AdminStorageGovernanceService` performs writes.
- New regression coverage was added in:
  - `AdminResourceGovernanceServiceTest`
  - `AdminStorageGovernanceServiceTest`
  - updated `AdminServiceTest`
  - updated `AdminServiceStoragePolicyCacheTest`
  - existing `AdminControllerIntegrationTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=AdminControllerIntegrationTest,AdminResourceGovernanceServiceTest,AdminStorageGovernanceServiceTest,AdminUserGovernanceServiceTest,AdminServiceTest,AdminServiceStoragePolicyCacheTest" test`
  - `cd backend && mvn test`
- Full backend result after this batch: 337 tests passed.
2026-04-11 Backend Refactor Batch 8
- The next admin refactor batch is now complete around read-side thematic decomposition; `AdminService` has been removed and replaced by explicit query services.
- `AdminController` now routes read endpoints through dedicated query services:
  - `AdminInspectionQueryService`: `GET /api/admin/summary`, `GET /api/admin/files`, `GET /api/admin/file-blobs`, `GET /api/admin/shares`
  - `AdminTaskQueryService`: `GET /api/admin/tasks`, `GET /api/admin/tasks/{taskId}`
  - `AdminStoragePolicyQueryService`: `GET /api/admin/storage-policies`
- Write paths remain in the previously extracted governance services:
  - `AdminUserGovernanceService`
  - `AdminResourceGovernanceService`
  - `AdminStorageGovernanceService`
- This leaves the admin surface with clear read/write service boundaries by responsibility, instead of a mixed read-orchestration class.
- Regression coverage was realigned to the new read-side services:
  - new `AdminInspectionQueryServiceTest`
  - new `AdminTaskQueryServiceTest`
  - new `AdminStoragePolicyQueryServiceCacheTest`
  - existing `AdminControllerIntegrationTest`
  - existing governance-service tests
- Verification passed with:
  - `cd backend && mvn "-Dtest=AdminInspectionQueryServiceTest,AdminTaskQueryServiceTest,AdminStoragePolicyQueryServiceCacheTest,AdminResourceGovernanceServiceTest,AdminStorageGovernanceServiceTest,AdminUserGovernanceServiceTest,AdminControllerIntegrationTest" test`
  - `cd backend && mvn test`
- Full backend result after this batch: 339 tests passed.
2026-04-11 Backend Refactor Batch 9
- The remaining Stage-7 admin item around explicit audit capability is now implemented.
- New audit domain pieces were added:
  - `AdminAuditService` (write-side audit recording)
  - `AdminAuditLogEntity` + `AdminAuditLogRepository`
  - `AdminAuditQueryService` + `AdminAuditLogResponse`
- `AdminController` now exposes `GET /api/admin/audits` for paged audit-log queries with filters:
  - `actorQuery`
  - `actionType`
  - `targetType`
  - `targetId`
- Governance write services now emit explicit audit records after successful writes:
  - `AdminMutableSettingsService`
  - `AdminUserGovernanceService`
  - `AdminResourceGovernanceService`
  - `AdminStorageGovernanceService`
- This keeps admin write rules in governance services while making audit a first-class, explicit admin capability instead of implicit side effects.
- Regression coverage added/updated in:
  - new `AdminAuditServiceTest`
  - new `AdminAuditQueryServiceTest`
  - updated `AdminControllerIntegrationTest`
  - updated governance-service unit tests and cache test wiring for the new audit dependency
- Verification passed with:
  - `cd backend && mvn "-Dtest=AdminAuditServiceTest,AdminAuditQueryServiceTest,AdminMutableSettingsServiceTest,AdminUserGovernanceServiceTest,AdminResourceGovernanceServiceTest,AdminStorageGovernanceServiceTest,AdminStoragePolicyQueryServiceCacheTest,AdminControllerIntegrationTest" test`
2026-04-11 Backend Refactor Batch 10
- The Stage-6 async-job direction is now advanced with an explicit command-vs-execution entry split, while preserving existing task behavior.
- New services were introduced:
  - `BackgroundTaskCommandService`
  - `BackgroundTaskExecutionService`
- Routing updates now use those boundaries:
  - `BackgroundTaskV2Controller` now depends on `BackgroundTaskCommandService` for user command/query flows (create/list/get/cancel/retry).
  - `BackgroundTaskWorker` now depends on `BackgroundTaskExecutionService` for queue scanning, claim, heartbeat/progress, completion, and failure transitions.
  - `BackgroundTaskStartupRecovery` now depends on `BackgroundTaskExecutionService` for expired-running-task recovery.
  - `MediaMetadataTaskBrokerConsumer` now depends on `BackgroundTaskCommandService` for auto media-metadata task creation.
  - `AdminStorageGovernanceService` now uses `BackgroundTaskCommandService` when creating storage-policy migration tasks.
- This batch keeps the existing `BackgroundTaskService` implementation intact as the internal rule engine, but external orchestration boundaries now explicitly separate command-oriented and execution-oriented entrypoints.
- Regression tests were updated for the new boundaries in:
  - `BackgroundTaskWorkerTest`
  - `MediaMetadataTaskBrokerConsumerTest`
  - `AdminStorageGovernanceServiceTest`
  - `AdminStoragePolicyQueryServiceCacheTest`
- Integration coverage remained green for:
  - `BackgroundTaskV2ControllerIntegrationTest`
  - `AdminControllerIntegrationTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=BackgroundTaskWorkerTest,MediaMetadataTaskBrokerConsumerTest,BackgroundTaskV2ControllerIntegrationTest,AdminStorageGovernanceServiceTest,AdminStoragePolicyQueryServiceCacheTest,AdminControllerIntegrationTest" test`
2026-04-11 Backend Refactor Batch 11
- Stage-6 async-job refactor continued with execution boundary hardening and state-transition consolidation.
- `BackgroundTaskExecutionService` now has explicit transactional boundaries on execution write paths used directly by worker/startup flows:
  - `requeueExpiredRunningTasks`
  - `claimQueuedTask`
  - `markWorkerTaskProgress`
  - `markWorkerTaskCompleted`
  - `markWorkerTaskFailed`
- `BackgroundTaskService` now accepts `BackgroundTaskExecutionService` as an explicit dependency at the primary Spring constructor boundary (instead of only relying on an internally constructed helper instance), and stale execution-only private helpers were removed from `BackgroundTaskService`.
- Execution-side state-key coupling was reduced: `BackgroundTaskExecutionService`, `BackgroundTaskWorker`, and `StoragePolicyMigrationBackgroundTaskHandler` now reference `BackgroundTaskStateKeys` directly instead of `BackgroundTaskService.STATE_*` aliases.
- Public-state transition patch assembly was further consolidated into `BackgroundTaskStateManager` with explicit helpers:
  - `cancelledStatePatch`
  - `completedStatePatch`
  - `failedStatePatch`
  - `retryQueuedStatePatch`
- This removes additional scattered `Map.of(...)` state-transition literals from service/worker write paths and advances the plan item of gradually replacing broad ad-hoc JSON merge usage with typed transition entrypoints.
- New regression coverage added: `BackgroundTaskStateManagerTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=BackgroundTaskStateManagerTest,BackgroundTaskServiceTest,BackgroundTaskWorkerTest,MediaMetadataTaskBrokerConsumerTest,BackgroundTaskV2ControllerIntegrationTest,AdminStorageGovernanceServiceTest,AdminStoragePolicyQueryServiceCacheTest,AdminControllerIntegrationTest" test`
- Full targeted result for this batch: 76 tests run, 0 failures.
- Full backend regression also passed with `cd backend && mvn test`.
- Backend total after this batch: 348 tests passed.
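The typed-transition idea above can be sketched as one named patch builder per terminal transition, replacing scattered `Map.of(...)` literals at call sites. The keys and shapes below are illustrative only, not the repo's actual state schema:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatePatchSketch {
    // One explicit entrypoint per transition: callers can no longer assemble
    // an inconsistent ad-hoc patch for a "failed" transition.
    static Map<String, Object> failedStatePatch(String errorMessage, int attempts) {
        Map<String, Object> patch = new LinkedHashMap<>();
        patch.put("status", "FAILED");
        patch.put("errorMessage", errorMessage);
        patch.put("attempts", attempts);
        return patch;
    }

    static Map<String, Object> completedStatePatch(long durationMillis) {
        Map<String, Object> patch = new LinkedHashMap<>();
        patch.put("status", "COMPLETED");
        patch.put("durationMillis", durationMillis);
        return patch;
    }

    public static void main(String[] args) {
        System.out.println(failedStatePatch("timeout", 3));
        System.out.println(completedStatePatch(1200L));
    }
}
```

Centralizing patch assembly is also what makes a focused test like `BackgroundTaskStateManagerTest` possible: the transition shape is asserted in one place.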
2026-04-11 Backend Refactor Batch 12
- Stage-6 async-job boundary thinning continued: `BackgroundTaskService` no longer exposes worker execution lifecycle methods (requeue/findQueued/claim/progress/complete/fail) and now remains on command/query orchestration responsibilities.
- Execution lifecycle ownership is now explicit at service boundaries:
  - `BackgroundTaskWorker` and `BackgroundTaskStartupRecovery` continue to use `BackgroundTaskExecutionService` directly for execution-state transitions.
  - `BackgroundTaskServiceTest` execution-lifecycle assertions were re-routed to call `BackgroundTaskExecutionService` directly, preserving behavioral coverage while keeping command-service boundaries clear.
- Handler-side state parsing was further consolidated into `BackgroundTaskStateManager`:
  - new reusable helpers were added: `parseJsonObject(...)`, `mergeJsonObjects(...)`, `readLong(...)`, and `readText(...)`.
  - `ArchiveBackgroundTaskHandler`, `ExtractBackgroundTaskHandler`, `MediaMetadataBackgroundTaskHandler`, and `StoragePolicyMigrationBackgroundTaskHandler` no longer keep duplicated per-handler JSON parse/extract boilerplate; they now delegate state decode and primitive extraction to `BackgroundTaskStateManager`.
  - Related handler tests were updated to construct handlers with `BackgroundTaskStateManager` instead of a raw `ObjectMapper`.
- Verification passed with:
  - `cd backend && mvn "-Dtest=BackgroundTaskServiceTest,BackgroundTaskWorkerTest,BackgroundTaskArchiveHandlerTest,ExtractBackgroundTaskHandlerTest,MediaMetadataBackgroundTaskHandlerTest,StoragePolicyMigrationBackgroundTaskHandlerTest,MediaMetadataTaskBrokerConsumerTest,BackgroundTaskV2ControllerIntegrationTest,AdminStorageGovernanceServiceTest,AdminStoragePolicyQueryServiceCacheTest,AdminControllerIntegrationTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch remains 348 passing tests.
2026-04-11 Backend Refactor Batch 13
- Stage-2 (workspace/content-asset split) first-cut rule extraction is now started in `files.core` without changing API behavior.
- A new `WorkspaceNodeRulesService` has been introduced to host workspace-node rule logic that was previously embedded inside `FileService`, including:
  - directory-path normalization (`normalizeDirectoryPath`)
  - leaf-name and upload-filename normalization (`normalizeLeafName`, `normalizeUploadFilename`)
  - path helpers (`extractParentPath`, `extractLeafName`, `buildTargetLogicalPath`)
  - directory hierarchy checks/build-up (`ensureDirectoryHierarchy`, `ensureExistingDirectoryPath`)
- `FileService` now delegates those workspace-rule responsibilities through `WorkspaceNodeRulesService`, reducing direct rule ownership in the orchestration service while keeping existing external behavior intact.
- New focused regression coverage was added in `WorkspaceNodeRulesServiceTest`.
- Existing `FileServiceTest` remained green to confirm behavior compatibility after delegation.
- Verification passed with:
  - `cd backend && mvn "-Dtest=WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch: 352 tests passed.
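The workspace path rules named above (`normalizeDirectoryPath`, `extractParentPath`, `extractLeafName`) have roughly this shape. The exact normalization rules inside `WorkspaceNodeRulesService` may differ; this sketch only shows the general contract of each helper:

```java
public class PathRulesSketch {
    // Collapse duplicate slashes, force a leading slash, strip a trailing one.
    static String normalizeDirectoryPath(String raw) {
        String p = raw == null ? "" : raw.trim().replaceAll("/+", "/");
        if (!p.startsWith("/")) p = "/" + p;
        if (p.length() > 1 && p.endsWith("/")) p = p.substring(0, p.length() - 1);
        return p;
    }

    static String extractParentPath(String logicalPath) {
        int idx = logicalPath.lastIndexOf('/');
        return idx <= 0 ? "/" : logicalPath.substring(0, idx);
    }

    static String extractLeafName(String logicalPath) {
        return logicalPath.substring(logicalPath.lastIndexOf('/') + 1);
    }

    public static void main(String[] args) {
        System.out.println(normalizeDirectoryPath("docs//notes/")); // /docs/notes
        System.out.println(extractParentPath("/docs/notes/a.txt")); // /docs/notes
        System.out.println(extractLeafName("/docs/notes/a.txt"));   // a.txt
    }
}
```

Hosting these in one rule service is what lets both `FileService` and later upload-session flows agree on a single canonical path form.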
2026-04-11 Backend Refactor Batch 14
- Stage-2 read/write rule thinning continued in `files.core` by further moving workspace conflict checks out of `FileService`.
- `WorkspaceNodeRulesService` now also owns:
  - sibling-name existence query (`existsNodeName`)
  - standardized conflict assertion (`ensureNodeNameAvailable`)
  - recycle-restore target conflict validation (`validateRecycleRestoreTargets`)
- `FileService` conflict checks for mkdir / rename / move / copy / upload pre-check / external-import pre-check now delegate to `WorkspaceNodeRulesService`, reducing duplicated repository-level rule literals in orchestration code.
- `FileService.validateRecycleRestoreTargets(...)` is now only an adapter that delegates to `WorkspaceNodeRulesService` with a `requireRecycleOriginalPath(...)` resolver.
- Regression coverage was extended in `WorkspaceNodeRulesServiceTest` with:
  - conflict assertion behavior (`ensureNodeNameAvailable`)
  - recycle-restore conflict behavior (`validateRecycleRestoreTargets`)
- Verification passed with:
  - `cd backend && mvn "-Dtest=WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch: 354 tests passed.
2026-04-11 Backend Refactor Batch 15
- Stage-2 content-asset boundary extraction continued with a first-cut content-binding service split.
- A new `ContentAssetBindingService` has been added in `files.core` to own content-asset binding rules that were previously embedded in `FileService`, including:
  - primary-entity create-or-reference behavior (`createOrReferencePrimaryEntity`)
  - default storage-policy capability projection for upload mode selection (`resolveDefaultStoragePolicyCapabilities`)
  - `StoredFile` -> `FileEntity` primary relation persistence (`savePrimaryEntityRelation`)
- `FileService` now delegates those content-binding rules through `ContentAssetBindingService`, further narrowing `FileService` toward orchestration across workspace/content/storage concerns.
- New regression coverage was added in `ContentAssetBindingServiceTest`.
- Existing rule-split tests remained green:
  - `WorkspaceNodeRulesServiceTest`
  - `FileServiceTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=ContentAssetBindingServiceTest,WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch: 357 tests passed.
2026-04-11 Backend Refactor Batch 16
- Stage-2 (workspace/content-asset split) continued with blob lifecycle rule extraction.
- A new `ContentBlobLifecycleService` has been added in `files.core` to own blob lifecycle rules previously embedded inside `FileService`, including:
  - post-write rollback guard (`executeAfterBlobStored`)
  - batch cleanup rollback for external-import partial writes (`cleanupWrittenBlobs`)
  - blob metadata persistence (`createAndSaveBlob`)
  - required blob assertion for file-content reads (`getRequiredBlob`)
  - blob deletion candidate aggregation by remaining references (`collectBlobsToDelete`)
  - physical blob + metadata deletion (`deleteBlobs`)
- `FileService` now delegates blob lifecycle operations through `ContentBlobLifecycleService` across:
  - normal upload and direct-upload completion
  - external single-file and batch import
  - recycle-bin expiry prune
  - file download URL/body reads and archive read/write paths
- New focused regression coverage was added in `ContentBlobLifecycleServiceTest`.
- Existing split-compat tests remained green:
  - `ContentAssetBindingServiceTest`
  - `WorkspaceNodeRulesServiceTest`
  - `FileServiceTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=ContentBlobLifecycleServiceTest,ContentAssetBindingServiceTest,WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch: 365 tests passed.
2026-04-11 Backend Refactor Batch 17
- Stage-2 continued with upload/quota rule boundary extraction from `FileService`.
- A new `FileUploadRulesService` has been added in `files.core` to own upload admission rules that were still embedded in orchestration code, including:
  - effective max upload-size resolution across the system limit, user limit, default storage-policy max size, and storage-policy capability `maxObjectSize`
  - filename/path conflict check via workspace node rules
  - user storage-quota guard (`sumFileSizeByUserId` + overflow-safe additional-bytes check)
- `FileService` upload/read-write paths now call `FileUploadRulesService` directly for:
  - standard upload
  - direct-upload initiate/complete validation
  - copy/restore/external-import quota checks
  - shared-file import and zip-import upload admission checks
- Existing fallback private helpers remain but are now gated behind explicit delegation to `FileUploadRulesService`, so active rule ownership is centralized in the extracted service.
- New focused regression coverage added in `FileUploadRulesServiceTest`.
- Existing Stage-2 split tests remained green:
  - `ContentBlobLifecycleServiceTest`
  - `ContentAssetBindingServiceTest`
  - `WorkspaceNodeRulesServiceTest`
  - `FileServiceTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=FileUploadRulesServiceTest,ContentBlobLifecycleServiceTest,ContentAssetBindingServiceTest,WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch: 368 tests passed.
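The two admission rules above have a simple canonical shape: effective max upload size is the minimum of all configured limits, and the quota guard compares by subtraction so adversarially large additional-byte values cannot overflow. The limit values and method names here are illustrative, not the repo's actual constants:

```java
import java.util.stream.LongStream;

public class UploadAdmissionSketch {
    // Minimum of all applicable limits; non-positive means "no limit configured".
    static long effectiveMaxUploadSize(long systemLimit, long userLimit,
                                       long policyMaxSize, long capabilityMaxObjectSize) {
        return LongStream.of(systemLimit, userLimit, policyMaxSize, capabilityMaxObjectSize)
                .filter(limit -> limit > 0)
                .min()
                .orElse(Long.MAX_VALUE);
    }

    // Overflow-safe: usedBytes + additionalBytes could wrap around, so
    // compare additionalBytes against the remaining headroom instead.
    static boolean quotaAllows(long usedBytes, long additionalBytes, long quotaBytes) {
        return additionalBytes <= quotaBytes - usedBytes;
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxUploadSize(1_000, 500, 0, 800)); // 500
        System.out.println(quotaAllows(900, 50, 1_000));                // true
        System.out.println(quotaAllows(900, Long.MAX_VALUE, 1_000));    // false
    }
}
```

The naive form `usedBytes + additionalBytes <= quotaBytes` would wrap negative on the last case and wrongly admit the upload, which is exactly what the overflow-safe check prevents.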
2026-04-11 Backend Refactor Batch 18
- Stage-2 continued with external-import rule extraction from `FileService`.
- A new `ExternalImportRulesService` has been added in `files.core` to own external-import normalization and batch validation rules that were previously embedded in orchestration code, including:
  - directory normalization + canonical ordering for batch import
  - import file descriptor normalization (path/name/content-type/content fallback)
  - batch-level target conflict checks (directory/file planned target collisions)
  - batch quota validation through `FileUploadRulesService`
- `FileService#importExternalFilesAtomically(...)` now routes normalization and batch validation through `ExternalImportRulesService`, keeping blob write + metadata orchestration in `FileService` while moving import-rule ownership into a dedicated rule service.
- New focused regression coverage added in `ExternalImportRulesServiceTest`.
- Existing Stage-2 split tests remained green:
  - `FileUploadRulesServiceTest`
  - `ContentBlobLifecycleServiceTest`
  - `ContentAssetBindingServiceTest`
  - `WorkspaceNodeRulesServiceTest`
  - `FileServiceTest`
- Verification passed with:
  - `cd backend && mvn "-Dtest=ExternalImportRulesServiceTest,FileUploadRulesServiceTest,ContentBlobLifecycleServiceTest,ContentAssetBindingServiceTest,WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch: 371 tests passed.
2026-04-11 Backend Refactor Batch 19
- Stage-3 upload rule convergence advanced by making upload-admission rules reusable across `files.core` and `files.upload`.
- `WorkspaceNodeRulesService` and `FileUploadRulesService` are now explicit reusable rule services (public boundary), so upload-session flows can consume the same normalized path/name + quota + conflict + max-size rules used by `FileService`.
- `UploadSessionService` now delegates create-session target admission to the shared rule services instead of keeping its own duplicated checks:
  - path/name normalization now routes through `WorkspaceNodeRulesService`
  - upload admission (effective max-size + same-directory conflict + quota) now routes through `FileUploadRulesService`
- Local duplicated methods in `UploadSessionService` were removed:
  - `validateTarget(...)` rule literals
  - local `normalizeDirectoryPath(...)`
  - local `normalizeLeafName(...)`
- This keeps v2 upload-session command flow behavior unchanged while moving rule ownership to a single shared entry point.
- Verification passed with:
  - `cd backend && mvn "-Dtest=UploadSessionServiceTest,UploadSessionV2ControllerTest,FileUploadRulesServiceTest,WorkspaceNodeRulesServiceTest,FileServiceTest" test`
  - full regression `cd backend && mvn test`
- Backend total after this batch remains 371 passing tests.
2026-04-12 Backend Refactor Batch 20
- Stage-4 share-domain convergence continued by thinning the legacy `/api/files/share-links/**` path into a compatibility layer that reuses v2 share governance rules.
- `FileController` legacy share read/import endpoints now delegate to `ShareV2Service` instead of directly calling legacy `FileService` share read/import logic:
  - `GET /api/files/share-links/{token}`
  - `POST /api/files/share-links/{token}/import`
- Legacy-vs-v2 error semantics are bridged in `FileController` via explicit `ApiV2Exception -> BusinessException` mapping, so old endpoints keep `ErrorCode` response envelopes while enforcing v2 policies.
- Legacy share behavior is now aligned with v2 governance for critical controls:
  - password-protected shares are no longer bypassable through legacy endpoints
  - `allowImport` policy and quota checks are enforced on the legacy import path through v2 service rules
- New integration coverage added in `FileShareControllerIntegrationTest`:
  - reject password-protected v2 shares on legacy read/import endpoints
  - reject legacy import when the v2 share has `allowImport=false`
- Verification passed with:
  - `cd backend && mvn "-Dtest=FileShareControllerIntegrationTest,ShareV2ControllerIntegrationTest,FileServiceTest" test`
  - full regression: `cd backend && mvn test`
- Backend total after this batch: 373 tests passed.
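The error-bridging idea can be sketched as below. This is a language-agnostic illustration of the `ApiV2Exception -> BusinessException` mapping, not the controller's actual code; the legacy error code `2003` and the share shape are made up for the example.

```typescript
// Hypothetical sketch: the legacy endpoint delegates to the v2 service and
// translates v2 errors back into the legacy error envelope, so v2 policies
// are enforced while old clients still see legacy error codes.
class ApiV2Exception extends Error {
  constructor(public status: number, message: string) { super(message); }
}
class BusinessException extends Error {
  constructor(public code: number, message: string) { super(message); }
}

// v2 service: the single owner of share governance (illustrative rule).
function readShareV2(share: { password?: string }, provided?: string): string {
  if (share.password && share.password !== provided) {
    throw new ApiV2Exception(403, "share password required");
  }
  return "share payload";
}

// Legacy endpoint: pure delegation plus explicit exception bridging.
function readShareLegacy(share: { password?: string }, provided?: string): string {
  try {
    return readShareV2(share, provided);
  } catch (e) {
    if (e instanceof ApiV2Exception) {
      // 2003 / 1000 are made-up legacy codes for illustration only.
      throw new BusinessException(e.status === 403 ? 2003 : 1000, e.message);
    }
    throw e;
  }
}

console.log(readShareLegacy({})); // share payload
try {
  readShareLegacy({ password: "secret" });
} catch (e) {
  console.log(e instanceof BusinessException); // true
}
```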
2026-04-12 Backend Refactor Batch 21
- Stage-4 share-domain convergence is now completed for the legacy share create/read/import compatibility endpoints.
- The `FileController` legacy share-create endpoint now delegates to `ShareV2Service` instead of legacy `FileService` logic:
  - `POST /api/files/{fileId}/share-links`
- The legacy response shape is still preserved via explicit mapping from `ShareV2Response` to `CreateFileShareLinkResponse`.
- Legacy-vs-v2 error semantics are now uniformly bridged for create/read/import through `ApiV2Exception -> BusinessException` mapping in `FileController`.
- New integration coverage added in `FileShareControllerIntegrationTest`:
  - reject legacy share creation for directory targets through unified v2 share rules (`BAD_REQUEST -> legacy code=1000` mapping path)
- Verification passed with:
  - `cd backend && mvn "-Dtest=FileShareControllerIntegrationTest,ShareV2ControllerIntegrationTest,FileServiceTest" test`
  - full regression: `cd backend && mvn test`
- Backend total after this batch: 374 tests passed.
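The response-shape preservation can be sketched as a simple field mapping. The field names in both DTOs below are assumptions for illustration; only the pattern (v2 creates, legacy adapter renames) reflects what the batch describes.

```typescript
// Hypothetical sketch: create goes through the v2 service, then the v2
// response is mapped field-by-field back to the legacy response contract,
// so existing clients never see the new shape.
type ShareV2Response = { token: string; expiresAt: string; allowImport: boolean };
type CreateFileShareLinkResponse = { shareToken: string; expireTime: string };

// Stand-in for ShareV2Service.create(...)
function createShareV2(fileId: number): ShareV2Response {
  return { token: `t-${fileId}`, expiresAt: "2026-05-01T00:00:00Z", allowImport: true };
}

// Legacy endpoint keeps its old response contract via explicit mapping.
function createShareLegacy(fileId: number): CreateFileShareLinkResponse {
  const v2 = createShareV2(fileId);
  return { shareToken: v2.token, expireTime: v2.expiresAt };
}

console.log(createShareLegacy(42)); // { shareToken: 't-42', expireTime: '2026-05-01T00:00:00Z' }
```

Keeping the mapping explicit (rather than returning the v2 DTO directly) is what lets the v2 shape evolve without breaking the frozen legacy contract.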
2026-04-12 Backend Refactor Batch 22
- Stage-5 transfer-domain decomposition advanced with explicit service boundaries while preserving controller API contracts.
- `TransferService` is now a thin orchestration facade, and transfer responsibilities were split into dedicated services:
  - `OnlineTransferService`: online session create/lookup/join/signal/poll + atomic session-store mutation entrypoints.
  - `OfflineTransferService`: offline session create/lookup/join/list/upload/download + expiry cleanup and ready-file access.
  - `OfflineTransferQuotaService`: offline upload admission rules (size/mismatch/global offline storage limit).
  - `TransferImportService`: offline file import orchestration into the workspace/content flow via `FileService.importExternalFile(...)`.
- Existing `/api/transfer/**` endpoints remain unchanged in `TransferController`; behavior is preserved through delegation at service boundaries.
- Transfer tests were realigned with the new boundaries:
  - `OnlineTransferServiceTest` added for atomic online session mutation checks (the `withSession(...)` path).
  - `TransferServiceTest` now verifies orchestration routing and the offline-auth boundary on create-session.
  - Existing integration coverage remained green in `TransferControllerIntegrationTest`.
- Verification passed with:
  - `cd backend && mvn "-Dtest=TransferControllerIntegrationTest,TransferServiceTest,OnlineTransferServiceTest,TransferSessionStoreTest" test`
  - full regression: `cd backend && mvn test`
- Backend total after this batch: 377 tests passed.
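The thin-facade shape can be sketched as follows; the interfaces and method names are illustrative, not the actual service signatures. The facade holds no rules of its own, so tests like `TransferServiceTest` only need to verify routing.

```typescript
// Hypothetical sketch: the facade keeps the public surface stable and
// forwards every call to a dedicated domain service.
interface OnlineTransfer { createSession(owner: string): string }
interface OfflineTransfer { createSession(owner: string): string; cleanupExpired(): number }

class TransferFacade {
  constructor(private online: OnlineTransfer, private offline: OfflineTransfer) {}
  // Each public method is pure routing; the rules live in the delegates.
  createOnlineSession(owner: string) { return this.online.createSession(owner); }
  createOfflineSession(owner: string) { return this.offline.createSession(owner); }
  cleanupExpiredOffline() { return this.offline.cleanupExpired(); }
}

// Wiring with trivial fakes, the same way a routing test would stub delegates.
const facade = new TransferFacade(
  { createSession: (o) => `online-${o}` },
  { createSession: (o) => `offline-${o}`, cleanupExpired: () => 0 },
);
console.log(facade.createOnlineSession("alice")); // online-alice
```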
2026-04-12 Frontend Refactor Batch 23
- Stage-8 frontend domain regroup has started with transfer-domain entrypoint extraction while preserving route/API behavior.
- Transfer domain files were reorganized:
  - `front/src/transfer/api/transfer.ts` now owns transfer API helpers and transfer types.
  - `front/src/transfer/pages/TransferPage.tsx` now owns the transfer page implementation.
- Compatibility shims were kept to avoid breaking legacy imports during the staged migration:
  - `front/src/lib/transfer.ts` now re-exports from `front/src/transfer/api/transfer.ts`
  - `front/src/pages/Transfer.tsx` now re-exports from `front/src/transfer/pages/TransferPage.tsx`
- The router domain entry now points to the transfer domain page directly in `front/src/App.tsx`.
- Verification:
  - `cd front && npm run lint` currently fails due to pre-existing type-check issues unrelated to this batch:
    - `src/components/upload/UploadCenter.tsx` effect cleanup return type
    - `src/hooks/use-directory-data.ts` effect cleanup return type
    - `src/hooks/use-session-runtime.ts` effect cleanup return type
  - `cd front && npm run build` passed (verified with sandbox-external execution where needed due to local spawn permission limits).
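The compatibility-shim pattern is just a re-export at the old module path. The helper below is a made-up example of content that would live at the new path; the commented line shows all the old path needs to contain.

```typescript
// Hypothetical sketch of the shim pattern.
// --- new home, e.g. front/src/transfer/api/transfer.ts (illustrative content) ---
export type TransferSession = { code: string };

export function buildPickupUrl(code: string): string {
  // URL-encode the pickup code so special characters survive the query string.
  return `/transfer?code=${encodeURIComponent(code)}`;
}

// --- legacy path, e.g. front/src/lib/transfer.ts, would contain only: ---
// export * from "../transfer/api/transfer";

console.log(buildPickupUrl("AB12")); // /transfer?code=AB12
```

Because the shim forwards every export, old import sites keep compiling unchanged, which is what keeps the migration blast radius small during the staged moves.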
2026-04-12 Frontend Refactor Batch 24
- Frontend verification baseline was repaired so Stage-8 iteration can keep using repo-defined checks cleanly.
- Fixed `useEffect` cleanup typing in runtime/cache subscribe paths by ensuring cleanup callbacks return `void` instead of `boolean`:
  - `front/src/lib/upload-runtime.ts`
  - `front/src/lib/files-cache.ts`
  - `front/src/lib/session-runtime.ts`
- This resolves the pre-existing `EffectCallback` type errors in:
  - `src/components/upload/UploadCenter.tsx`
  - `src/hooks/use-directory-data.ts`
  - `src/hooks/use-session-runtime.ts`
- Verification passed with:
  - `cd front && npm run lint`
  - `cd front && npm run build`
2026-04-12 Frontend Refactor Batch 25
- Stage-8 frontend domain regroup continued with route-level domain entry migration (while retaining compatibility shims for phased file moves).
- Added domain page entry wrappers:
  - `front/src/account/pages/LoginPage.tsx`
  - `front/src/workspace/pages/OverviewPage.tsx`
  - `front/src/workspace/pages/FilesPage.tsx`
  - `front/src/workspace/pages/RecycleBinPage.tsx`
  - `front/src/sharing/pages/SharesPage.tsx`
  - `front/src/sharing/pages/FileSharePage.tsx`
  - `front/src/common/pages/TasksPage.tsx`
- App routing imports in `front/src/App.tsx` now consume domain entrypoints instead of binding directly to legacy `src/pages/*`.
- The transfer domain route/API entry continues to use:
  - `front/src/transfer/pages/TransferPage.tsx`
  - `front/src/transfer/api/transfer.ts`
- Compatibility shims remain in place (`src/pages/Transfer.tsx`, `src/lib/transfer.ts`) to reduce the migration blast radius while allowing progressive internal moves.
- Verification passed with:
  - `cd front && npm run lint`
  - `cd front && npm run build`