feat: add registry config, image upload/download, and OCI format support
Backend:
- Add registry_address configuration API (GET/POST)
- Add tar image upload with OCI and Docker format support
- Add image download with streaming optimization
- Fix blob download using c.Send (Fiber v3 SendStream bug)
- Add registry_address prefix stripping for all OCI v2 endpoints
- Add AGENTS.md for project documentation

Frontend:
- Add settings store with Snackbar notifications
- Add image upload dialog with progress bar
- Add download state tracking with multi-stage feedback
- Replace alert() with MUI Snackbar messages
- Display image names without registry_address prefix

🤖 Generated with [Qoder](https://qoder.com)
AGENTS.md (new file, 167 lines)
@@ -0,0 +1,167 @@
# AGENTS.md

This file provides guidance to Qoder (qoder.com) when working with code in this repository.

## Project Overview

Cluster is a lightweight OCI (Open Container Initiative) registry implementation with a Go backend (Fiber v3) and React/TypeScript frontend. It provides Docker registry v2 API compliance for container image storage and management.

## Build and Development Commands

### Backend (Go)

```bash
# Build the binary
go build -o cluster .

# Run with default settings
./cluster

# Run with custom configuration
./cluster -debug -address 0.0.0.0:8080 -data-dir ./x-storage

# Run tests
go test ./pkg/tool/...

# Update dependencies
go mod tidy
```

**Command-line flags:**

- `-debug`: Enable debug logging
- `-address` / `-A`: Server listen address (default: 0.0.0.0:9119)
- `-data-dir` / `-D`: Data directory for storage (default: ./x-storage)

### Frontend (React/TypeScript)

**Package Manager:** This project uses `pnpm` exclusively. Do not use npm or yarn.

```bash
cd frontend

# Install dependencies
pnpm install

# Development server (runs on http://localhost:3000)
pnpm dev

# Lint TypeScript/React code
pnpm lint

# Production build
pnpm run build

# Preview production build
pnpm preview
```

### Full Stack Development

```bash
# Run both backend and frontend concurrently
./dev.sh
```

The `dev.sh` script starts the Go backend and the Vite dev server together, handling graceful shutdown on Ctrl+C.

## Architecture

### Request Flow

1. **Entry Point** (`main.go`): Initializes signal handling and delegates to the Cobra command
2. **Command Layer** (`internal/cmd/cmd.go`): Manages the application lifecycle
   - Initializes configuration (`internal/opt`)
   - Initializes the SQLite database (`pkg/database/db`) at `{data-dir}/cluster.db`
   - Initializes filesystem storage (`pkg/store`) for blobs/manifests
   - Starts the Fiber HTTP server (`internal/api`)
3. **API Layer** (`internal/api/api.go`): Routes requests to handlers
   - OCI Registry v2 endpoints: `/v2/*` routes to `internal/module/registry`
   - Custom API v1 endpoints: `/api/v1/registry/*` for image listing and config
4. **Handler Layer** (`internal/module/registry/`): Implements OCI registry operations
   - `registry.go`: Main routing logic
   - `blob.go`: Blob upload/download/chunked upload
   - `manifest.go`: Manifest storage and retrieval
   - `tag.go`: Tag management
   - `catalog.go`: Repository listing

### Storage Architecture

**Database** (`pkg/database/db`):
- SQLite via GORM
- Stores: repositories, tags, manifest metadata, registry config
- Location: `{data-dir}/cluster.db`

**Filesystem Storage** (`pkg/store`):
- Content-addressed storage with SHA256
- Structure:
  - `registry/blobs/sha256/{2-char-prefix}/{2-char-prefix}/{full-hash}` - Blob files
  - `registry/manifests/sha256/{2-char-prefix}/{2-char-prefix}/{full-hash}` - Manifest files
  - `registry/uploads/{uuid}` - Temporary upload files
- All write operations verify SHA256 digest before finalizing

### Frontend Architecture

**Tech Stack:**
- React 18 + TypeScript
- Vite build tool
- Zustand state management (`src/stores/`)
- Material-UI (MUI) components
- React Router for routing

**API Proxy:** Dev server proxies `/api/*` requests to backend (configured in `vite.config.ts`)

### Key Packages

- `pkg/tool/`: Utility functions (UUID generation, password hashing, file helpers, table formatting, etc.)
- `pkg/resp/`: HTTP response helpers and i18n
- `internal/middleware/`: CORS, logging, recovery, repository context injection
- `internal/model/`: GORM database models

## Code Conventions

### Go

- Use GORM for database operations via the `db.Default` singleton
- Use `pkg/store.Default` for filesystem operations
- Handlers receive `(ctx context.Context, db *gorm.DB, store store.Store)` and return `fiber.Handler`
- Middleware uses the Fiber v3 context
- Digest format is always `sha256:hexstring`
- All blob/manifest writes must verify digest before finalizing

### Frontend

- Use Zustand for state management
- Follow MUI theming patterns defined in `src/theme.ts`
- TypeScript strict mode enabled
- Components in `src/components/`, pages in `src/pages/`

## Testing

```bash
# Run Go tests
go test ./pkg/tool/...

# Frontend linting (no test framework currently configured)
cd frontend && pnpm lint
```

## OCI Registry v2 API Endpoints

- `GET /v2/` - Registry version check
- `POST /v2/{repo}/blobs/uploads/` - Initiate blob upload, returns upload UUID
- `PATCH /v2/{repo}/blobs/uploads/{uuid}` - Upload blob chunk (chunked upload)
- `PUT /v2/{repo}/blobs/uploads/{uuid}?digest={digest}` - Finalize blob upload
- `GET /v2/{repo}/blobs/{digest}` - Download blob
- `HEAD /v2/{repo}/blobs/{digest}` - Check blob existence
- `PUT /v2/{repo}/manifests/{tag}` - Store manifest
- `GET /v2/{repo}/manifests/{tag}` - Retrieve manifest
- `DELETE /v2/{repo}/manifests/{tag}` - Delete manifest
- `GET /v2/{repo}/tags/list` - List repository tags
- `GET /v2/_catalog` - List all repositories

## Custom API v1 Endpoints

- `GET /api/v1/registry/image/list` - List images with metadata
- `GET /api/v1/registry/image/download/*` - Download images
- `GET /api/v1/registry/config` - Get registry configuration
- `POST /api/v1/registry/config` - Set registry configuration
dev.sh (103 lines changed)
@@ -1,78 +1,69 @@
#!/usr/bin/env bash
# Run backend (Go) and frontend (Vite) together; stop both on Ctrl+C
set -euo pipefail

# Always run from repo root
set -e

# Colors for output
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m' # No Color

# Get the directory where the script is located
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"

BACKEND_ADDR=${BACKEND_ADDR:-0.0.0.0:9119}
DATA_DIR=${DATA_DIR:-./x-storage}

# Store PIDs for cleanup
BACKEND_PID=""
FRONTEND_PID=""

# Function to cleanup background processes
cleanup() {
    echo ""
    echo "[dev] Shutting down..."

    # Kill backend if running
    if [ -n "$BACKEND_PID" ] && kill -0 "$BACKEND_PID" 2>/dev/null; then
        echo "[dev] Stopping backend (PID: $BACKEND_PID)..."
        kill "$BACKEND_PID" 2>/dev/null || true
        wait "$BACKEND_PID" 2>/dev/null || true
    echo -e "\n${YELLOW}Shutting down services...${NC}"
    if [ ! -z "$BACKEND_PID" ]; then
        kill $BACKEND_PID 2>/dev/null || true
        wait $BACKEND_PID 2>/dev/null || true
    fi

    # Kill frontend if running
    if [ -n "$FRONTEND_PID" ] && kill -0 "$FRONTEND_PID" 2>/dev/null; then
        echo "[dev] Stopping frontend (PID: $FRONTEND_PID)..."
        kill "$FRONTEND_PID" 2>/dev/null || true
        wait "$FRONTEND_PID" 2>/dev/null || true
    if [ ! -z "$FRONTEND_PID" ]; then
        kill $FRONTEND_PID 2>/dev/null || true
        wait $FRONTEND_PID 2>/dev/null || true
    fi

    # Kill any remaining background jobs
    if command -v xargs >/dev/null 2>&1; then
        jobs -p | xargs kill 2>/dev/null || true
    else
        # Fallback for systems without xargs -r
        for job in $(jobs -p); do
            kill "$job" 2>/dev/null || true
        done
    fi

    # Wait a bit for graceful shutdown
    sleep 1

    # Force kill if still running
    if [ -n "$BACKEND_PID" ] && kill -0 "$BACKEND_PID" 2>/dev/null; then
        kill -9 "$BACKEND_PID" 2>/dev/null || true
    fi
    if [ -n "$FRONTEND_PID" ] && kill -0 "$FRONTEND_PID" 2>/dev/null; then
        kill -9 "$FRONTEND_PID" 2>/dev/null || true
    fi

    echo "[dev] Shutdown complete"
    echo -e "${GREEN}All services stopped.${NC}"
    exit 0
}

trap cleanup INT TERM EXIT
# Trap Ctrl+C and cleanup
trap cleanup SIGINT SIGTERM

# Start backend
echo "[dev] Starting backend on $BACKEND_ADDR (data-dir=$DATA_DIR)"
go run . --debug --address "$BACKEND_ADDR" --data-dir "$DATA_DIR" &
# Check if pnpm is available, otherwise use npm
if command -v pnpm &> /dev/null; then
    PACKAGE_MANAGER="pnpm"
elif command -v npm &> /dev/null; then
    PACKAGE_MANAGER="npm"
else
    echo -e "${RED}Error: Neither pnpm nor npm is installed.${NC}"
    exit 1
fi

# Check if Go is installed
if ! command -v go &> /dev/null; then
    echo -e "${RED}Error: Go is not installed.${NC}"
    exit 1
fi

echo -e "${GREEN}Starting backend (Go)...${NC}"
go run main.go &
BACKEND_PID=$!

# Wait a moment for backend to start
sleep 2

# Start frontend
echo "[dev] Starting frontend dev server (Vite)"
(cd frontend && pnpm run dev) &
echo -e "${GREEN}Starting frontend (Vite)...${NC}"
cd frontend
$PACKAGE_MANAGER run dev &
FRONTEND_PID=$!
cd ..

echo "[dev] Backend PID: $BACKEND_PID, Frontend PID: $FRONTEND_PID"
echo "[dev] Press Ctrl+C to stop both..."
echo -e "${GREEN}Both services are running!${NC}"
echo -e "${YELLOW}Backend PID: $BACKEND_PID${NC}"
echo -e "${YELLOW}Frontend PID: $FRONTEND_PID${NC}"
echo -e "${YELLOW}Press Ctrl+C to stop both services${NC}"

# Wait for both processes to exit
# Wait for both processes
wait $BACKEND_PID $FRONTEND_PID
@@ -1,5 +1,42 @@
import { useEffect, useState } from 'react'
import { Box, Typography, Paper, Table, TableBody, TableCell, TableContainer, TableHead, TableRow, CircularProgress, Alert } from '@mui/material'
import { useEffect, useState, forwardRef } from 'react'
import {
  Box,
  Typography,
  Paper,
  Table,
  TableBody,
  TableCell,
  TableContainer,
  TableHead,
  TableRow,
  CircularProgress,
  Alert,
  Button,
  Drawer,
  TextField,
  Stack,
  IconButton,
  Divider,
  Snackbar,
  Dialog,
  DialogTitle,
  DialogContent,
  DialogActions,
  LinearProgress,
} from '@mui/material'
import MuiAlert, { AlertProps } from '@mui/material/Alert'
import DownloadIcon from '@mui/icons-material/Download'
import UploadIcon from '@mui/icons-material/Upload'
import SettingsIcon from '@mui/icons-material/Settings'
import CloseIcon from '@mui/icons-material/Close'
import { useSettingsStore } from '../stores/settingsStore'

const SnackbarAlert = forwardRef<HTMLDivElement, AlertProps>(function SnackbarAlert(
  props,
  ref,
) {
  return <MuiAlert elevation={6} ref={ref} variant="filled" {...props} />
})

interface RegistryImage {
  id: number
@@ -17,10 +54,40 @@ function formatSize(bytes: number): string {
  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`
}

// Remove registry address prefix from image name for display
// e.g., "registry.loveuer.com/hub.yizhisec.com/external/busybox:1.37.0"
// -> "hub.yizhisec.com/external/busybox:1.37.0"
function getDisplayImageName(fullImageName: string): string {
  const firstSlashIndex = fullImageName.indexOf('/')
  if (firstSlashIndex > 0) {
    // Remove everything before the first slash (registry address)
    return fullImageName.substring(firstSlashIndex + 1)
  }
  // No slash found, return original name
  return fullImageName
}

export default function RegistryImageList() {
  const [images, setImages] = useState<RegistryImage[]>([])
  const [loading, setLoading] = useState(true)
  const [error, setError] = useState<string | null>(null)
  const [settingsOpen, setSettingsOpen] = useState(false)
  const [uploadOpen, setUploadOpen] = useState(false)
  const [selectedFile, setSelectedFile] = useState<File | null>(null)
  const [uploading, setUploading] = useState(false)
  const [uploadProgress, setUploadProgress] = useState(0)
  const [downloadingImage, setDownloadingImage] = useState<string | null>(null)
  const {
    registryAddress,
    loading: configLoading,
    snackbar,
    fetchConfig,
    setRegistryAddress,
    hideSnackbar,
    showSnackbar
  } = useSettingsStore()
  const [registryAddressInput, setRegistryAddressInput] = useState(registryAddress)
  const [saving, setSaving] = useState(false)

  useEffect(() => {
    let abort = false
@@ -44,9 +111,169 @@ export default function RegistryImageList() {
    return () => { abort = true }
  }, [])

  const handleDownload = async (imageName: string) => {
    setDownloadingImage(imageName)
    showSnackbar('正在准备下载,请稍候...', 'info')

    try {
      // Use the image name directly as it's already the full repository name from database
      // The imageName already contains the full path like: registry.loveuer.com/hub.yizhisec.com/external/busybox:1.37.0
      // We should NOT modify it based on registry_address setting
      const fullImageName = imageName

      showSnackbar('正在连接服务器...', 'info')
      const url = `/api/v1/registry/image/download/${encodeURIComponent(fullImageName)}`
      const response = await fetch(url)
      if (!response.ok) {
        throw new Error(`下载失败: ${response.statusText}`)
      }

      showSnackbar('正在接收数据,请稍候...', 'info')

      // Get filename from Content-Disposition header or use default
      const contentDisposition = response.headers.get('Content-Disposition')
      let filename = `${imageName.replace(/\//g, '_')}-latest.tar`
      if (contentDisposition) {
        const filenameMatch = contentDisposition.match(/filename="?(.+)"?/i)
        if (filenameMatch) {
          filename = filenameMatch[1]
        }
      }

      // Create blob and download
      const blob = await response.blob()
      const downloadUrl = window.URL.createObjectURL(blob)
      const link = document.createElement('a')
      link.href = downloadUrl
      link.download = filename
      document.body.appendChild(link)
      link.click()
      document.body.removeChild(link)
      window.URL.revokeObjectURL(downloadUrl)

      showSnackbar('下载完成!', 'success')
    } catch (error: unknown) {
      const err = error as Error
      showSnackbar(`下载失败: ${err.message}`, 'error')
    } finally {
      setDownloadingImage(null)
    }
  }

  const handleSaveSettings = async () => {
    try {
      setSaving(true)
      await setRegistryAddress(registryAddressInput)
      setSettingsOpen(false)
    } catch (error: any) {
      // Error already handled in store
    } finally {
      setSaving(false)
    }
  }

  const handleCloseSettings = () => {
    setRegistryAddressInput(registryAddress)
    setSettingsOpen(false)
  }

  useEffect(() => {
    setRegistryAddressInput(registryAddress)
  }, [registryAddress])

  useEffect(() => {
    fetchConfig()
  }, [fetchConfig])

  const handleFileSelect = (event: React.ChangeEvent<HTMLInputElement>) => {
    const file = event.target.files?.[0]
    if (file && file.name.endsWith('.tar')) {
      setSelectedFile(file)
    } else {
      showSnackbar('请选择 .tar 文件', 'error')
    }
  }

  const handleUpload = async () => {
    if (!selectedFile) {
      showSnackbar('请先选择文件', 'warning')
      return
    }

    setUploading(true)
    setUploadProgress(0)

    try {
      const formData = new FormData()
      formData.append('file', selectedFile)

      const xhr = new XMLHttpRequest()

      xhr.upload.addEventListener('progress', (e) => {
        if (e.lengthComputable) {
          const percentComplete = (e.loaded / e.total) * 100
          setUploadProgress(percentComplete)
        }
      })

      xhr.addEventListener('load', () => {
        if (xhr.status === 200) {
          const result = JSON.parse(xhr.responseText)
          showSnackbar(`上传成功: ${result.data?.repository || ''}`, 'success')
          setUploadOpen(false)
          setSelectedFile(null)
          setUploadProgress(0)
          window.location.reload()
        } else {
          const result = JSON.parse(xhr.responseText)
          showSnackbar(`上传失败: ${result.msg || xhr.statusText}`, 'error')
        }
        setUploading(false)
      })

      xhr.addEventListener('error', () => {
        showSnackbar('上传失败: 网络错误', 'error')
        setUploading(false)
      })

      xhr.open('POST', '/api/v1/registry/image/upload')
      xhr.send(formData)
    } catch (error: unknown) {
      const err = error as Error
      showSnackbar(`上传失败: ${err.message}`, 'error')
      setUploading(false)
    }
  }

  const handleCloseUpload = () => {
    if (!uploading) {
      setUploadOpen(false)
      setSelectedFile(null)
      setUploadProgress(0)
    }
  }

  return (
    <Box>
      <Typography variant="h5" gutterBottom>镜像列表</Typography>
      <Box sx={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', mb: 2 }}>
        <Typography variant="h5">镜像列表</Typography>
        <Stack direction="row" spacing={2}>
          <Button
            variant="contained"
            startIcon={<UploadIcon />}
            onClick={() => setUploadOpen(true)}
          >
            上传镜像
          </Button>
          <Button
            variant="outlined"
            startIcon={<SettingsIcon />}
            onClick={() => setSettingsOpen(true)}
          >
            设置
          </Button>
        </Stack>
      </Box>
      {loading && <CircularProgress />}
      {error && <Alert severity="error">加载失败: {error}</Alert>}
      {!loading && !error && (
@@ -59,20 +286,32 @@ export default function RegistryImageList() {
              <TableCell>名称</TableCell>
              <TableCell>上传时间</TableCell>
              <TableCell>大小</TableCell>
              <TableCell>操作</TableCell>
            </TableRow>
          </TableHead>
          <TableBody>
            {images.map(img => (
              <TableRow key={img.id} hover>
                <TableCell>{img.id}</TableCell>
                <TableCell>{img.name}</TableCell>
                <TableCell>{getDisplayImageName(img.name)}</TableCell>
                <TableCell>{img.upload_time}</TableCell>
                <TableCell>{formatSize(img.size)}</TableCell>
                <TableCell>
                  <Button
                    variant="outlined"
                    size="small"
                    startIcon={downloadingImage === img.name ? <CircularProgress size={16} /> : <DownloadIcon />}
                    onClick={() => handleDownload(img.name)}
                    disabled={downloadingImage === img.name}
                  >
                    {downloadingImage === img.name ? '下载中...' : '下载'}
                  </Button>
                </TableCell>
              </TableRow>
            ))}
            {images.length === 0 && (
              <TableRow>
                <TableCell colSpan={4} align="center">暂无镜像</TableCell>
                <TableCell colSpan={5} align="center">暂无镜像</TableCell>
              </TableRow>
            )}
          </TableBody>
@@ -80,6 +319,109 @@ export default function RegistryImageList() {
        </TableContainer>
      </Paper>
      )}

      {/* Upload Dialog */}
      <Dialog open={uploadOpen} onClose={handleCloseUpload} maxWidth="sm" fullWidth>
        <DialogTitle>上传镜像文件</DialogTitle>
        <DialogContent>
          <Stack spacing={3} sx={{ mt: 2 }}>
            <Alert severity="info">
              请上传 Docker 镜像 tar 文件(使用 docker save 导出的文件)
            </Alert>
            <Button
              variant="outlined"
              component="label"
              fullWidth
              disabled={uploading}
            >
              选择文件
              <input
                type="file"
                accept=".tar"
                hidden
                onChange={handleFileSelect}
              />
            </Button>
            {selectedFile && (
              <Typography variant="body2" color="text.secondary">
                已选择: {selectedFile.name} ({formatSize(selectedFile.size)})
              </Typography>
            )}
            {uploading && (
              <Box>
                <LinearProgress variant="determinate" value={uploadProgress} />
                <Typography variant="body2" color="text.secondary" sx={{ mt: 1 }}>
                  上传中: {uploadProgress.toFixed(0)}%
                </Typography>
              </Box>
            )}
          </Stack>
        </DialogContent>
        <DialogActions>
          <Button onClick={handleCloseUpload} disabled={uploading}>
            取消
          </Button>
          <Button
            onClick={handleUpload}
            variant="contained"
            disabled={!selectedFile || uploading}
          >
            {uploading ? '上传中...' : '开始上传'}
          </Button>
        </DialogActions>
      </Dialog>

      {/* Settings Drawer */}
      <Drawer
        anchor="right"
        open={settingsOpen}
        onClose={handleCloseSettings}
        PaperProps={{
          sx: { width: 400, p: 3 }
        }}
      >
        <Box sx={{ display: 'flex', justifyContent: 'space-between', alignItems: 'center', mb: 3 }}>
          <Typography variant="h6">镜像设置</Typography>
          <IconButton onClick={handleCloseSettings}>
            <CloseIcon />
          </IconButton>
        </Box>
        <Divider sx={{ mb: 3 }} />

        <Stack spacing={3}>
          <TextField
            label="Registry Address"
            placeholder="例如: my-registry.com"
            value={registryAddressInput}
            onChange={(e) => setRegistryAddressInput(e.target.value)}
            fullWidth
            helperText="设置的 registry address 会作用于下载的镜像名称"
            variant="outlined"
            disabled={configLoading || saving}
          />

          <Box sx={{ display: 'flex', gap: 2, justifyContent: 'flex-end' }}>
            <Button variant="outlined" onClick={handleCloseSettings} disabled={saving}>
              取消
            </Button>
            <Button variant="contained" onClick={handleSaveSettings} disabled={saving || configLoading}>
              {saving ? '保存中...' : '保存'}
            </Button>
          </Box>
        </Stack>
      </Drawer>

      {/* Snackbar for notifications */}
      <Snackbar
        open={snackbar.open}
        autoHideDuration={3000}
        onClose={hideSnackbar}
        anchorOrigin={{ vertical: 'top', horizontal: 'center' }}
      >
        <SnackbarAlert onClose={hideSnackbar} severity={snackbar.severity}>
          {snackbar.message}
        </SnackbarAlert>
      </Snackbar>
    </Box>
  )
}
frontend/src/stores/settingsStore.ts (new file, 88 lines)
@@ -0,0 +1,88 @@
import { create } from 'zustand'

interface SnackbarMessage {
  open: boolean
  message: string
  severity: 'success' | 'error' | 'info' | 'warning'
}

interface SettingsState {
  registryAddress: string
  loading: boolean
  error: string | null
  snackbar: SnackbarMessage
  fetchConfig: () => Promise<void>
  setRegistryAddress: (address: string) => Promise<void>
  showSnackbar: (message: string, severity: SnackbarMessage['severity']) => void
  hideSnackbar: () => void
}

export const useSettingsStore = create<SettingsState>((set, get) => ({
  registryAddress: '',
  loading: false,
  error: null,
  snackbar: {
    open: false,
    message: '',
    severity: 'info'
  },

  fetchConfig: async () => {
    set({ loading: true, error: null })
    try {
      const res = await fetch('/api/v1/registry/config')
      if (!res.ok) throw new Error(`HTTP ${res.status}`)
      const result = await res.json()
      const configs = result.data?.configs || {}
      set({
        registryAddress: configs.registry_address || '',
        loading: false
      })
    } catch (error: any) {
      set({ error: error.message, loading: false })
    }
  },

  setRegistryAddress: async (address: string) => {
    set({ loading: true, error: null })
    try {
      const res = await fetch('/api/v1/registry/config', {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          key: 'registry_address',
          value: address,
        }),
      })
      if (!res.ok) throw new Error(`HTTP ${res.status}`)
      set({ registryAddress: address, loading: false })
      get().showSnackbar('保存成功', 'success')
    } catch (error: any) {
      set({ error: error.message, loading: false })
      get().showSnackbar(`保存失败: ${error.message}`, 'error')
      throw error
    }
  },

  showSnackbar: (message: string, severity: SnackbarMessage['severity']) => {
    set({
      snackbar: {
        open: true,
        message,
        severity
      }
    })
  },

  hideSnackbar: () => {
    set({
      snackbar: {
        open: false,
        message: '',
        severity: 'info'
      }
    })
  },
}))
go.mod (5 lines changed)
@@ -5,7 +5,10 @@ go 1.25.0
require (
	github.com/glebarez/sqlite v1.11.0
	github.com/gofiber/fiber/v3 v3.0.0-beta.2
	github.com/gofrs/uuid v4.4.0+incompatible
	github.com/jedib0t/go-pretty/v6 v6.7.1
	github.com/spf13/cobra v1.10.1
	golang.org/x/crypto v0.41.0
	gorm.io/gorm v1.31.1
)

@@ -21,7 +24,9 @@ require (
	github.com/klauspost/compress v1.18.0 // indirect
	github.com/mattn/go-colorable v0.1.14 // indirect
	github.com/mattn/go-isatty v0.0.20 // indirect
	github.com/mattn/go-runewidth v0.0.16 // indirect
	github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
	github.com/rivo/uniseg v0.4.7 // indirect
	github.com/spf13/pflag v1.0.9 // indirect
	github.com/stretchr/testify v1.11.1 // indirect
	github.com/valyala/bytebufferpool v1.0.0 // indirect
go.sum (15 lines changed)
@@ -15,12 +15,16 @@ github.com/gofiber/fiber/v3 v3.0.0-beta.2 h1:mVVgt8PTaHGup3NGl/+7U7nEoZaXJ5OComV
github.com/gofiber/fiber/v3 v3.0.0-beta.2/go.mod h1:w7sdfTY0okjZ1oVH6rSOGvuACUIt0By1iK0HKUb3uqM=
github.com/gofiber/utils/v2 v2.0.0-rc.1 h1:b77K5Rk9+Pjdxz4HlwEBnS7u5nikhx7armQB8xPds4s=
github.com/gofiber/utils/v2 v2.0.0-rc.1/go.mod h1:Y1g08g7gvST49bbjHJ1AVqcsmg93912R/tbKWhn6V3E=
github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26 h1:Xim43kblpZXfIBQsbuBVKCudVG457BR2GZFIz3uw3hQ=
github.com/google/pprof v0.0.0-20221118152302-e6195bd50e26/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
github.com/gofrs/uuid v4.4.0+incompatible h1:3qXRTX8/NbyulANqlc0lchS1gqAVxRgsuW1YrTJupqA=
github.com/gofrs/uuid v4.4.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7 h1:y3N7Bm7Y9/CtpiVkw/ZWj6lSlDF3F74SfKwfTCer72Q=
github.com/google/pprof v0.0.0-20240227163752-401108e1b7e7/go.mod h1:czg5+yv1E0ZGTi6S6vVK1mke0fV+FaUhNGcd6VRS9Ik=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jedib0t/go-pretty/v6 v6.7.1 h1:bHDSsj93NuJ563hHuM7ohk/wpX7BmRFNIsVv1ssI2/M=
github.com/jedib0t/go-pretty/v6 v6.7.1/go.mod h1:YwC5CE4fJ1HFUDeivSV1r//AmANFHyqczZk+U6BDALU=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
@@ -31,11 +35,16 @@ github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHP
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/remyoudompheng/bigfft v0.0.0-20200410134404-eec4a21b6bb0/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec h1:W09IVJc94icq4NjY3clb7Lk8O1qJ8BdBEF8z0ibU0rE=
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shamaton/msgpack/v2 v2.2.3 h1:uDOHmxQySlvlUYfQwdjxyybAOzjlQsD1Vjy+4jmO9NM=
github.com/shamaton/msgpack/v2 v2.2.3/go.mod h1:6khjYnkx73f7VQU7wjcFS9DFjs+59naVWJv1TB7qdOI=
@@ -53,6 +62,8 @@ github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xyproto/randomstring v1.0.5 h1:YtlWPoRdgMu3NZtP45drfy1GKoojuR7hmRcnhZqKjWU=
github.com/xyproto/randomstring v1.0.5/go.mod h1:rgmS5DeNXLivK7YprL0pY+lTuhNQW3iGxZ18UQApw/E=
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
@@ -7,19 +7,21 @@ import (
    "net"

    "gitea.loveuer.com/loveuer/cluster/internal/middleware"
    "gitea.loveuer.com/loveuer/cluster/internal/model"
    "gitea.loveuer.com/loveuer/cluster/internal/module/registry"
    "gitea.loveuer.com/loveuer/cluster/pkg/store"
    "github.com/gofiber/fiber/v3"
    "gorm.io/gorm"
)

func Init(ctx context.Context, address string, db *gorm.DB, store store.Store) error {
func Init(ctx context.Context, address string, db *gorm.DB, store store.Store) (func(context.Context) error, error) {
    var (
        err error
        ln  net.Listener
        cfg = fiber.Config{
            BodyLimit: 1024 * 1024 * 1024 * 10, // 10GB limit for large image layers
        }
        fn func(context.Context) error
    )

    app := fiber.New(cfg)
@@ -28,6 +30,12 @@ func Init(ctx context.Context, address string, db *gorm.DB, store store.Store) e
    app.Use(middleware.Recovery())
    app.Use(middleware.CORS())

    // Ensure database migration for RegistryConfig
    // This is done here to ensure the table exists before config APIs are called
    if err := db.AutoMigrate(&model.RegistryConfig{}); err != nil {
        log.Printf("Warning: failed to migrate RegistryConfig: %v", err)
    }

    // oci image apis
    {
        app.All("/v2/*", registry.Registry(ctx, db, store))
@@ -37,25 +45,32 @@ func Init(ctx context.Context, address string, db *gorm.DB, store store.Store) e
    {
        registryAPI := app.Group("/api/v1/registry")
        registryAPI.Get("/image/list", registry.RegistryImageList(ctx, db, store))
        registryAPI.Get("/image/download/*", registry.RegistryImageDownload(ctx, db, store))
        registryAPI.Post("/image/upload", registry.RegistryImageUpload(ctx, db, store))
        // registry config apis
        registryAPI.Get("/config", registry.RegistryConfigGet(ctx, db, store))
        registryAPI.Post("/config", registry.RegistryConfigSet(ctx, db, store))
    }

    ln, err = net.Listen("tcp", address)
    if err != nil {
        return fmt.Errorf("failed to listen on %s: %w", address, err)
        return fn, fmt.Errorf("failed to listen on %s: %w", address, err)
    }

    go func() {
        if err := app.Listener(ln); err != nil {
        if err = app.Listener(ln); err != nil {
            log.Fatalf("Fiber server failed on %s: %v", address, err)
        }
    }()

    go func() {
        <-ctx.Done()
        if err := app.Shutdown(); err != nil {
            log.Fatalf("Failed to shutdown: %v", err)
    fn = func(_ctx context.Context) error {
        log.Println("[W] service shutdown...")
        if err = app.ShutdownWithContext(_ctx); err != nil {
            return fmt.Errorf("[E] service shutdown failed, err = %w", err)
        }
    }()

    return nil
    }

    return fn, nil
}
@@ -2,11 +2,13 @@ package cmd

import (
    "context"
    "log"

    "gitea.loveuer.com/loveuer/cluster/internal/api"
    "gitea.loveuer.com/loveuer/cluster/internal/opt"
    "gitea.loveuer.com/loveuer/cluster/pkg/database/db"
    "gitea.loveuer.com/loveuer/cluster/pkg/store"
    "gitea.loveuer.com/loveuer/cluster/pkg/tool"
    "github.com/spf13/cobra"
)

@@ -17,6 +19,8 @@ func Run(ctx context.Context) error {
        RunE: func(cmd *cobra.Command, args []string) error {
            var (
                err     error
                stopFns = []func(context.Context) error{}
                stopApi func(context.Context) error
            )

            if err = opt.Init(cmd.Context()); err != nil {
@@ -31,11 +35,16 @@ func Run(ctx context.Context) error {
                return err
            }

            if err = api.Init(cmd.Context(), opt.GlobalAddress, db.Default, store.Default); err != nil {
            if stopApi, err = api.Init(cmd.Context(), opt.GlobalAddress, db.Default, store.Default); err != nil {
                return err
            }

            stopFns = append(stopFns, stopApi)

            <-cmd.Context().Done()
            log.Println("[W] received exit signal, shutting down...")

            tool.MustStop(tool.Timeout(5), stopFns...)

            return nil
        },
@@ -45,5 +54,5 @@ func Run(ctx context.Context) error {
    _cmd.PersistentFlags().StringVarP(&opt.GlobalAddress, "address", "A", "0.0.0.0:9119", "API server listen address")
    _cmd.PersistentFlags().StringVarP(&opt.GlobalDataDir, "data-dir", "D", "./x-storage", "Data directory for storing all data")

    return _cmd.Execute()
    return _cmd.ExecuteContext(ctx)
}
@@ -68,3 +68,14 @@ type BlobUpload struct {
    Path string `gorm:"not null" json:"path"`  // storage path
    Size int64  `gorm:"default:0" json:"size"` // uploaded size
}

// RegistryConfig stores registry configuration as key-value pairs
type RegistryConfig struct {
    ID        uint           `gorm:"primarykey" json:"id"`
    CreatedAt time.Time      `json:"created_at"`
    UpdatedAt time.Time      `json:"updated_at"`
    DeletedAt gorm.DeletedAt `gorm:"index" json:"-"`

    Key   string `gorm:"uniqueIndex;not null" json:"key"` // configuration key
    Value string `gorm:"type:text" json:"value"`          // configuration value
}

@@ -6,6 +6,7 @@ import (
    "encoding/hex"
    "fmt"
    "io"
    "log"
    "strconv"
    "strings"

@@ -48,6 +49,16 @@ func HandleBlobs(c fiber.Ctx, db *gorm.DB, store store.Store) error {

    // repo is everything before the "blobs" segment
    repo := strings.Join(parts[:blobsIndex], "/")

    // Strip registry_address prefix from repo if present
    var registryConfig model.RegistryConfig
    registryAddress := ""
    if err := db.Where("key = ?", "registry_address").First(&registryConfig).Error; err == nil {
        registryAddress = registryConfig.Value
    }
    if registryAddress != "" && strings.HasPrefix(repo, registryAddress+"/") {
        repo = strings.TrimPrefix(repo, registryAddress+"/")
    }
    // re-slice parts so that parts[0] is "blobs"
    parts = parts[blobsIndex:]

@@ -285,42 +296,53 @@ func parseRangeHeader(rangeHeader string, size int64) (start, end int64, valid b

// handleBlobDownload downloads a blob
func handleBlobDownload(c fiber.Ctx, db *gorm.DB, store store.Store, repo string, digest string) error {
    log.Printf("[BlobDownload] Start: repo=%s, digest=%s", repo, digest)

    // Check if blob exists
    exists, err := store.BlobExists(c.Context(), digest)
    if err != nil {
        log.Printf("[BlobDownload] BlobExists error: %v", err)
        return resp.R500(c, "", nil, err)
    }
    if !exists {
        log.Printf("[BlobDownload] Blob not found: %s", digest)
        return resp.R404(c, "BLOB_NOT_FOUND", nil, "blob not found")
    }

    // Get blob size
    size, err := store.GetBlobSize(c.Context(), digest)
    if err != nil {
        log.Printf("[BlobDownload] GetBlobSize error: %v", err)
        return resp.R500(c, "", nil, err)
    }
    log.Printf("[BlobDownload] Blob size: %d bytes", size)

    // Read blob
    reader, err := store.ReadBlob(c.Context(), digest)
    if err != nil {
        log.Printf("[BlobDownload] ReadBlob error: %v", err)
        return resp.R500(c, "", nil, err)
    }
    defer reader.Close()
    log.Printf("[BlobDownload] Reader opened successfully")

    // Check for Range request
    rangeHeader := c.Get("Range")
    start, end, hasRange := parseRangeHeader(rangeHeader, size)

    if hasRange {
        log.Printf("[BlobDownload] Range request: %d-%d/%d", start, end, size)
        // Handle Range request
        // Seek to start position
        if seeker, ok := reader.(io.Seeker); ok {
            if _, err := seeker.Seek(start, io.SeekStart); err != nil {
                log.Printf("[BlobDownload] Seek error: %v", err)
                return resp.R500(c, "", nil, err)
            }
        } else {
            // If not seekable, read and discard bytes
            if _, err := io.CopyN(io.Discard, reader, start); err != nil {
                log.Printf("[BlobDownload] CopyN discard error: %v", err)
                return resp.R500(c, "", nil, err)
            }
        }
@@ -336,18 +358,32 @@ func handleBlobDownload(c fiber.Ctx, db *gorm.DB, store store.Store, repo string
        c.Set("Docker-Content-Digest", digest)
        c.Status(206) // Partial Content

        // Send partial content
        return c.SendStream(limitedReader)
        log.Printf("[BlobDownload] Sending partial content")
        // Read all content and send
        content, err := io.ReadAll(limitedReader)
        if err != nil {
            log.Printf("[BlobDownload] ReadAll error: %v", err)
            return resp.R500(c, "", nil, err)
        }
        return c.Send(content)
    }

    // Full blob download
    log.Printf("[BlobDownload] Full blob download, setting headers")
    c.Set("Content-Type", "application/octet-stream")
    c.Set("Content-Length", fmt.Sprintf("%d", size))
    c.Set("Accept-Ranges", "bytes")
    c.Set("Docker-Content-Digest", digest)

    // Send full blob stream
    return c.SendStream(reader)
    log.Printf("[BlobDownload] About to read all content, size=%d", size)
    // Read all content and send
    content, err := io.ReadAll(reader)
    if err != nil {
        log.Printf("[BlobDownload] ReadAll error: %v", err)
        return resp.R500(c, "", nil, err)
    }
    log.Printf("[BlobDownload] Read %d bytes, sending...", len(content))
    return c.Send(content)
}

// handleBlobHead returns blob metadata
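The download path above dispatches on `parseRangeHeader(rangeHeader, size)`, whose body is outside this hunk. A minimal sketch consistent with that signature, handling a single `bytes=start-end` range (this is an assumption about its behaviour, not the project's actual implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseRange parses a single-range "bytes=start-end" header against a
// known blob size, clamping an open-ended or oversized end to size-1.
func parseRange(h string, size int64) (start, end int64, ok bool) {
	if !strings.HasPrefix(h, "bytes=") {
		return 0, 0, false
	}
	parts := strings.SplitN(strings.TrimPrefix(h, "bytes="), "-", 2)
	if len(parts) != 2 {
		return 0, 0, false
	}
	start, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil || start < 0 || start >= size {
		return 0, 0, false
	}
	end = size - 1 // open-ended range: "bytes=100-"
	if parts[1] != "" {
		if end, err = strconv.ParseInt(parts[1], 10, 64); err != nil || end < start {
			return 0, 0, false
		}
		if end >= size {
			end = size - 1
		}
	}
	return start, end, true
}

func main() {
	s, e, ok := parseRange("bytes=0-99", 1000)
	fmt.Println(s, e, ok) // 0 99 true
}
```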
81
internal/module/registry/handler.config.go
Normal file
@@ -0,0 +1,81 @@
package registry

import (
    "context"
    "encoding/json"

    "gitea.loveuer.com/loveuer/cluster/internal/model"
    "gitea.loveuer.com/loveuer/cluster/pkg/resp"
    "gitea.loveuer.com/loveuer/cluster/pkg/store"
    "github.com/gofiber/fiber/v3"
    "gorm.io/gorm"
)

// RegistryConfigGet returns the registry configuration
func RegistryConfigGet(ctx context.Context, db *gorm.DB, store store.Store) fiber.Handler {
    return func(c fiber.Ctx) error {
        var configs []model.RegistryConfig
        if err := db.Find(&configs).Error; err != nil {
            return resp.R500(c, "", nil, err)
        }

        // Convert to map for easier frontend access
        configMap := make(map[string]string)
        for _, config := range configs {
            configMap[config.Key] = config.Value
        }

        return resp.R200(c, map[string]interface{}{
            "configs": configMap,
        })
    }
}

// RegistryConfigSet sets a registry configuration value
func RegistryConfigSet(ctx context.Context, db *gorm.DB, store store.Store) fiber.Handler {
    return func(c fiber.Ctx) error {
        var req struct {
            Key   string `json:"key"`
            Value string `json:"value"`
        }

        // Parse JSON body
        body := c.Body()
        if len(body) == 0 {
            return resp.R400(c, "EMPTY_BODY", nil, "request body is empty")
        }
        if err := json.Unmarshal(body, &req); err != nil {
            return resp.R400(c, "INVALID_REQUEST", nil, "invalid request body")
        }

        if req.Key == "" {
            return resp.R400(c, "MISSING_KEY", nil, "key is required")
        }

        // Find or create config
        var config model.RegistryConfig
        err := db.Where("key = ?", req.Key).First(&config).Error
        if err == gorm.ErrRecordNotFound {
            // Create new config
            config = model.RegistryConfig{
                Key:   req.Key,
                Value: req.Value,
            }
            if err := db.Create(&config).Error; err != nil {
                return resp.R500(c, "", nil, err)
            }
        } else if err != nil {
            return resp.R500(c, "", nil, err)
        } else {
            // Update existing config
            config.Value = req.Value
            if err := db.Save(&config).Error; err != nil {
                return resp.R500(c, "", nil, err)
            }
        }

        return resp.R200(c, map[string]interface{}{
            "config": config,
        })
    }
}
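Stripped of GORM, the POST handler above implements key-value upsert semantics over the `RegistryConfig` table: create the row if the key is absent, overwrite its value otherwise. A sketch of that behaviour with a map standing in for the database (the function name is illustrative):

```go
package main

import "fmt"

// setConfig mirrors RegistryConfigSet's find-or-create-then-update flow,
// with a map standing in for the registry_configs table.
func setConfig(store map[string]string, key, value string) error {
	if key == "" {
		return fmt.Errorf("key is required") // the handler's MISSING_KEY case
	}
	store[key] = value // create if missing, update otherwise
	return nil
}

func main() {
	store := map[string]string{}
	_ = setConfig(store, "registry_address", "test.com")
	_ = setConfig(store, "registry_address", "registry.local:9119")
	fmt.Println(store["registry_address"]) // registry.local:9119
}
```

Because the `Key` column carries a `uniqueIndex`, repeated POSTs for the same key update the single row rather than accumulating duplicates.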
370
internal/module/registry/handler.download.go
Normal file
@@ -0,0 +1,370 @@
package registry

import (
    "archive/tar"
    "bufio"
    "context"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/url"
    "strings"
    "time"

    "gitea.loveuer.com/loveuer/cluster/internal/model"
    "gitea.loveuer.com/loveuer/cluster/pkg/resp"
    "gitea.loveuer.com/loveuer/cluster/pkg/store"
    "github.com/gofiber/fiber/v3"
    "gorm.io/gorm"
)

// RegistryImageDownload downloads an image as a tar file
func RegistryImageDownload(ctx context.Context, db *gorm.DB, store store.Store) fiber.Handler {
    return func(c fiber.Ctx) error {
        startTime := time.Now()

        // Get image name from wildcard parameter (Fiber automatically decodes URL)
        fullImageName := c.Params("*")
        if fullImageName == "" {
            return resp.R400(c, "MISSING_IMAGE_NAME", nil, "image name is required")
        }

        // Additional URL decode in case Fiber didn't decode it
        decodedImageName, err := url.PathUnescape(fullImageName)
        if err == nil {
            fullImageName = decodedImageName
        }

        log.Printf("[Download] Start downloading: %s", fullImageName)

        // Get current registry_address to strip it from the request
        var registryConfig model.RegistryConfig
        registryAddress := ""
        if err := db.Where("key = ?", "registry_address").First(&registryConfig).Error; err == nil {
            registryAddress = registryConfig.Value
        }
        if registryAddress == "" {
            registryAddress = "localhost:9119"
        }

        // Strip registry_address prefix if present
        // e.g., "test.com/docker.io/redis:latest" -> "docker.io/redis:latest"
        imageName := fullImageName
        if strings.HasPrefix(imageName, registryAddress+"/") {
            imageName = strings.TrimPrefix(imageName, registryAddress+"/")
        }

        // Parse image name (repository:tag) to extract repository and tag
        var repository, tag string
        if strings.Contains(imageName, ":") {
            parts := strings.SplitN(imageName, ":", 2)
            repository = parts[0]
            tag = parts[1]
        } else {
            // If no tag specified, default to "latest"
            repository = imageName
            tag = "latest"
        }

        log.Printf("[Download] Parsed - repository: %s, tag: %s", repository, tag)

        // Find the repository
        t1 := time.Now()
        var repositoryModel model.Repository
        if err := db.Where("name = ?", repository).First(&repositoryModel).Error; err != nil {
            if err == gorm.ErrRecordNotFound {
                return resp.R404(c, "IMAGE_NOT_FOUND", nil, fmt.Sprintf("image %s not found", repository))
            }
            return resp.R500(c, "", nil, err)
        }
        log.Printf("[Download] DB query repository: %v", time.Since(t1))

        // Find the tag record
        t2 := time.Now()
        var tagRecord model.Tag
        if err := db.Where("repository = ? AND tag = ?", repository, tag).First(&tagRecord).Error; err != nil {
            if err == gorm.ErrRecordNotFound {
                // Try to get the first available tag
                if err := db.Where("repository = ?", repository).First(&tagRecord).Error; err != nil {
                    if err == gorm.ErrRecordNotFound {
                        return resp.R404(c, "TAG_NOT_FOUND", nil, fmt.Sprintf("no tag found for image %s", repository))
                    }
                    return resp.R500(c, "", nil, err)
                }
                // Update tag to the found tag
                tag = tagRecord.Tag
            } else {
                return resp.R500(c, "", nil, err)
            }
        }
        log.Printf("[Download] DB query tag: %v", time.Since(t2))

        // Get the manifest
        t3 := time.Now()
        var manifest model.Manifest
        if err := db.Where("digest = ?", tagRecord.Digest).First(&manifest).Error; err != nil {
            if err == gorm.ErrRecordNotFound {
                return resp.R404(c, "MANIFEST_NOT_FOUND", nil, "manifest not found")
            }
            return resp.R500(c, "", nil, err)
        }
        log.Printf("[Download] DB query manifest: %v", time.Since(t3))

        // Read manifest content - try from database first, then from store
        t4 := time.Now()
        var manifestContent []byte
        if len(manifest.Content) > 0 {
            manifestContent = manifest.Content
        } else {
            var err error
            manifestContent, err = store.ReadManifest(c.Context(), manifest.Digest)
            if err != nil {
                return resp.R500(c, "", nil, err)
            }
        }
        log.Printf("[Download] Read manifest content: %v", time.Since(t4))

        // Parse manifest to extract layer digests
        t5 := time.Now()
        var manifestData map[string]interface{}
        if err := json.Unmarshal(manifestContent, &manifestData); err != nil {
            return resp.R500(c, "", nil, fmt.Errorf("failed to parse manifest: %w", err))
        }

        // Debug: check manifest keys
        manifestKeys := make([]string, 0, len(manifestData))
        for k := range manifestData {
            manifestKeys = append(manifestKeys, k)
        }
        if _, ok := manifestData["layers"]; !ok {
            return resp.R500(c, "", nil, fmt.Errorf("manifest keys: %v, no layers key found", manifestKeys))
        }

        // Extract layers from manifest
        layers, err := extractLayers(manifestData)
        if err != nil {
            return resp.R500(c, "", nil, fmt.Errorf("failed to extract layers: %w", err))
        }
        if len(layers) == 0 {
            // Debug: check if layers key exists
            if layersValue, ok := manifestData["layers"]; ok {
                return resp.R500(c, "", nil, fmt.Errorf("no layers found in manifest, but layers key exists: %T", layersValue))
            }
            return resp.R500(c, "", nil, fmt.Errorf("no layers found in manifest"))
        }
        log.Printf("[Download] Parse manifest and extract %d layers: %v", len(layers), time.Since(t5))
        log.Printf("[Download] Preparation completed in %v, starting tar generation", time.Since(startTime))

        // Set response headers for file download
        filename := fmt.Sprintf("%s-%s.tar", strings.ReplaceAll(repository, "/", "_"), tag)
        c.Set("Content-Type", "application/x-tar")
        c.Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", filename))

        // Create a pipe for streaming tar content
        pr, pw := io.Pipe()

        // Use buffered writer for better performance
        bufWriter := bufio.NewWriterSize(pw, 1024*1024) // 1MB buffer
        tarWriter := tar.NewWriter(bufWriter)

        // Write tar content in a goroutine
        go func() {
            defer pw.Close()
            defer tarWriter.Close()
            defer bufWriter.Flush()

            // Get config digest from manifest
            configDigest := ""
            if configValue, ok := manifestData["config"].(map[string]interface{}); ok {
                if digest, ok := configValue["digest"].(string); ok {
                    configDigest = digest
                }
            }

            // Build Docker save format manifest.json (array format)
            manifestItems := []map[string]interface{}{
                {
                    "Config":   strings.TrimPrefix(configDigest, "sha256:") + ".json",
                    "RepoTags": []string{repository + ":" + tag},
                    "Layers":   make([]string, 0, len(layers)),
                },
            }

            // Add layer paths to manifest
            for _, layerDigest := range layers {
                layerPath := strings.TrimPrefix(layerDigest, "sha256:") + "/layer"
                manifestItems[0]["Layers"] = append(manifestItems[0]["Layers"].([]string), layerPath)
            }

            // Convert manifest to JSON
            manifestJSON, err := json.Marshal(manifestItems)
            if err != nil {
                pw.CloseWithError(fmt.Errorf("failed to marshal manifest: %w", err))
                return
            }

            // Write manifest.json (Docker save format)
            if err := writeTarFile(tarWriter, "manifest.json", manifestJSON); err != nil {
                pw.CloseWithError(fmt.Errorf("failed to write manifest: %w", err))
                return
            }

            // Write repositories file (Docker save format)
            repositoriesMap := map[string]map[string]string{
                repository: {
                    tag: strings.TrimPrefix(tagRecord.Digest, "sha256:"),
                },
            }
            repositoriesJSON, err := json.Marshal(repositoriesMap)
            if err != nil {
                pw.CloseWithError(fmt.Errorf("failed to marshal repositories: %w", err))
                return
            }
            if err := writeTarFile(tarWriter, "repositories", repositoriesJSON); err != nil {
                pw.CloseWithError(fmt.Errorf("failed to write repositories: %w", err))
                return
            }

            // Write config file if exists
            if configDigest != "" {
                configReader, err := store.ReadBlob(c.Context(), configDigest)
                if err == nil {
                    configFileName := strings.TrimPrefix(configDigest, "sha256:") + ".json"
                    if err := writeTarFileStream(tarWriter, configFileName, configReader); err != nil {
                        configReader.Close()
                        pw.CloseWithError(fmt.Errorf("failed to write config: %w", err))
                        return
                    }
                    configReader.Close()
                }
            }

            // Write all layer blobs in Docker save format (digest/layer)
            for _, layerDigest := range layers {
                // Read blob
                blobReader, err := store.ReadBlob(c.Context(), layerDigest)
                if err != nil {
                    pw.CloseWithError(fmt.Errorf("failed to read blob %s: %w", layerDigest, err))
                    return
                }

                // Write blob to tar in Docker save format: {digest}/layer
                digestOnly := strings.TrimPrefix(layerDigest, "sha256:")
                layerPath := digestOnly + "/layer"

                if err := writeTarFileStream(tarWriter, layerPath, blobReader); err != nil {
                    blobReader.Close()
                    pw.CloseWithError(fmt.Errorf("failed to write blob: %w", err))
                    return
                }
                blobReader.Close()
            }

            // Close tar writer and pipe
            if err := tarWriter.Close(); err != nil {
                pw.CloseWithError(fmt.Errorf("failed to close tar writer: %w", err))
                return
            }

            if err := bufWriter.Flush(); err != nil {
                pw.CloseWithError(fmt.Errorf("failed to flush buffer: %w", err))
                return
            }
        }()

        // Stream the tar content to response
        return c.SendStream(pr)
    }
}

// extractLayers extracts layer digests from manifest
func extractLayers(manifestData map[string]interface{}) ([]string, error) {
    var layers []string

    // Try Docker manifest v2 schema 2 format or OCI manifest format
    if layersValue, ok := manifestData["layers"]; ok {
        if layersArray, ok := layersValue.([]interface{}); ok {
            for _, layer := range layersArray {
                if layerMap, ok := layer.(map[string]interface{}); ok {
                    if digest, ok := layerMap["digest"].(string); ok {
                        layers = append(layers, digest)
                    }
                }
            }
            if len(layers) > 0 {
                return layers, nil
            }
        }
    }

    // Try manifest list format (multi-arch)
    if _, ok := manifestData["manifests"].([]interface{}); ok {
        // For manifest list, we would need to fetch the actual manifest
        // For now, return error
        return nil, fmt.Errorf("manifest list format not supported for direct download")
    }

    return nil, fmt.Errorf("no layers found in manifest")
}
// writeTarFile writes a file to tar archive
func writeTarFile(tw *tar.Writer, filename string, content []byte) error {
    header := &tar.Header{
        Name: filename,
        Size: int64(len(content)),
        Mode: 0644,
    }

    if err := tw.WriteHeader(header); err != nil {
        return err
    }

    if _, err := tw.Write(content); err != nil {
        return err
    }

    return nil
}

// writeTarFileStream writes a file to tar archive from an io.Reader (streaming)
func writeTarFileStream(tw *tar.Writer, filename string, reader io.Reader) error {
    // First, we need to get the size by reading into a buffer
    // For true streaming without reading all content, we'd need to know size beforehand
    // Since we're reading from files, we can use io.Copy with a CountingReader

    // Create a temporary buffer to count size
    var buf []byte
    var err error

    // If reader is a *os.File, we can get size directly
    if file, ok := reader.(interface{ Stat() (interface{ Size() int64 }, error) }); ok {
        if stat, err := file.Stat(); err == nil {
            size := stat.Size()
            header := &tar.Header{
                Name: filename,
                Size: size,
                Mode: 0644,
            }

            if err := tw.WriteHeader(header); err != nil {
                return err
            }

            // Stream copy without loading to memory
            if _, err := io.Copy(tw, reader); err != nil {
                return err
            }

            return nil
        }
    }

    // Fallback: read all content (for readers that don't support Stat)
    buf, err = io.ReadAll(reader)
    if err != nil {
        return err
    }

    return writeTarFile(tw, filename, buf)
}
@@ -13,6 +13,16 @@ import (
// RegistryImageList returns the list of images/repositories
func RegistryImageList(ctx context.Context, db *gorm.DB, store store.Store) fiber.Handler {
    return func(c fiber.Ctx) error {
        // Get current registry_address setting
        var registryConfig model.RegistryConfig
        registryAddress := ""
        if err := db.Where("key = ?", "registry_address").First(&registryConfig).Error; err == nil {
            registryAddress = registryConfig.Value
        }
        if registryAddress == "" {
            registryAddress = "localhost:9119"
        }

        var repositories []model.Repository

        // Query all repositories from the database
@@ -23,6 +33,17 @@ func RegistryImageList(ctx context.Context, db *gorm.DB, store store.Store) fibe
        // Convert to the expected format for the frontend
        var result []map[string]interface{}
        for _, repo := range repositories {
            // Get all tags for this repository
            var tags []model.Tag
            if err := db.Where("repository = ?", repo.Name).Find(&tags).Error; err != nil {
                continue // Skip this repository if we can't get tags
            }

            // If no tags, skip this repository
            if len(tags) == 0 {
                continue
            }

            // Calculate total size of all blobs for this repository
            var totalSize int64
            var sizeResult struct {
@@ -39,14 +60,22 @@ func RegistryImageList(ctx context.Context, db *gorm.DB, store store.Store) fibe
            // Format updated_at to second precision
            uploadTime := repo.UpdatedAt.Format("2006-01-02 15:04:05")

            // Create an entry for each tag with full image name
            // Dynamically prepend registry_address to the repository name
            for _, tag := range tags {
                fullRepoName := registryAddress + "/" + repo.Name
                fullImageName := fullRepoName + ":" + tag.Tag
                repoMap := map[string]interface{}{
                    "id":   repo.ID,
                    "name": repo.Name,
                    "name":        fullImageName, // Full image name: registry_address/repo:tag
                    "repository":  repo.Name,     // Original repository name (without registry_address)
                    "tag":         tag.Tag,       // Tag name
                    "upload_time": uploadTime,
                    "size":        totalSize,
                }
                result = append(result, repoMap)
            }
        }

        return resp.R200(c, map[string]interface{}{
            "images": result,

416
internal/module/registry/handler.upload.go
Normal file
@@ -0,0 +1,416 @@
package registry

import (
    "archive/tar"
    "context"
    "crypto/sha256"
    "encoding/hex"
    "encoding/json"
    "fmt"
    "io"
    "strings"

    "gitea.loveuer.com/loveuer/cluster/internal/model"
    "gitea.loveuer.com/loveuer/cluster/pkg/resp"
    "gitea.loveuer.com/loveuer/cluster/pkg/store"
    "github.com/gofiber/fiber/v3"
    "gorm.io/gorm"
)

type DockerManifestItem struct {
    Config   string   `json:"Config"`
    RepoTags []string `json:"RepoTags"`
    Layers   []string `json:"Layers"`
}

type OCIIndex struct {
    SchemaVersion int                     `json:"schemaVersion"`
    MediaType     string                  `json:"mediaType"`
    Manifests     []OCIManifestDescriptor `json:"manifests"`
}

type OCIManifestDescriptor struct {
    MediaType   string            `json:"mediaType"`
    Digest      string            `json:"digest"`
    Size        int64             `json:"size"`
    Annotations map[string]string `json:"annotations,omitempty"`
}

type OCIManifest struct {
    SchemaVersion int             `json:"schemaVersion"`
    MediaType     string          `json:"mediaType,omitempty"`
    Config        OCIDescriptor   `json:"config"`
    Layers        []OCIDescriptor `json:"layers"`
}

type OCIDescriptor struct {
    MediaType string `json:"mediaType"`
    Digest    string `json:"digest"`
    Size      int64  `json:"size"`
}

func RegistryImageUpload(ctx context.Context, db *gorm.DB, store store.Store) fiber.Handler {
    return func(c fiber.Ctx) error {
        file, err := c.FormFile("file")
        if err != nil {
            return resp.R400(c, "MISSING_FILE", nil, "file is required")
        }

        if !strings.HasSuffix(file.Filename, ".tar") {
            return resp.R400(c, "INVALID_FILE_TYPE", nil, "only .tar files are allowed")
        }

        fileReader, err := file.Open()
        if err != nil {
            return resp.R500(c, "", nil, fmt.Errorf("failed to open file: %w", err))
        }
        defer fileReader.Close()

        tarReader := tar.NewReader(fileReader)

        var manifestItems []DockerManifestItem
        var ociIndex *OCIIndex
        blobContents := make(map[string][]byte)
        layerContents := make(map[string][]byte)
        var configContent []byte
        var configDigest string
        var indexJSON []byte

        for {
            header, err := tarReader.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                return resp.R500(c, "", nil, fmt.Errorf("failed to read tar: %w", err))
            }

            content, err := io.ReadAll(tarReader)
            if err != nil {
                return resp.R500(c, "", nil, fmt.Errorf("failed to read file content: %w", err))
            }

            switch header.Name {
            case "manifest.json":
                if err := json.Unmarshal(content, &manifestItems); err != nil {
                    return resp.R500(c, "", nil, fmt.Errorf("failed to parse manifest.json: %w", err))
                }
            case "index.json":
                indexJSON = content
                if err := json.Unmarshal(content, &ociIndex); err != nil {
                    return resp.R500(c, "", nil, fmt.Errorf("failed to parse index.json: %w", err))
                }
            default:
                if strings.HasSuffix(header.Name, ".json") && header.Name != "manifest.json" && header.Name != "index.json" {
                    configContent = content
                    hash := sha256.Sum256(content)
                    configDigest = "sha256:" + hex.EncodeToString(hash[:])
                } else if strings.HasSuffix(header.Name, "/layer") {
                    layerContents[header.Name] = content
                } else if strings.Contains(header.Name, "blobs/sha256/") && !strings.HasSuffix(header.Name, "/") {
                    blobContents[header.Name] = content
                }
            }
        }

        // Handle OCI format
        if ociIndex != nil && len(ociIndex.Manifests) > 0 {
            return handleOCIFormat(c, db, store, ociIndex, blobContents, indexJSON)
        }

        // Handle Docker format
        if len(manifestItems) == 0 {
            return resp.R400(c, "INVALID_TAR", nil, "manifest.json or index.json not found in tar file")
        }

        manifestItem := manifestItems[0]
        if len(manifestItem.RepoTags) == 0 {
            return resp.R400(c, "INVALID_MANIFEST", nil, "no RepoTags found in manifest")
        }

        // Extract original repository and tag from tar file
        // e.g., "docker.io/redis:latest" -> repo: "docker.io/redis", tag: "latest"
        originalRepoTag := manifestItem.RepoTags[0]
        parts := strings.SplitN(originalRepoTag, ":", 2)
        originalRepo := parts[0]
        tag := "latest"
        if len(parts) == 2 {
            tag = parts[1]
        }

        // Store only the original repository name (without registry_address prefix)
        // This allows registry_address to be changed without breaking existing images
        repoName := originalRepo

        if err := db.FirstOrCreate(&model.Repository{}, model.Repository{Name: repoName}).Error; err != nil {
            return resp.R500(c, "", nil, fmt.Errorf("failed to create repository: %w", err))
        }

        if err := store.CreatePartition(c.Context(), "registry"); err != nil {
            return resp.R500(c, "", nil, fmt.Errorf("failed to create partition: %w", err))
        }

        if configContent != nil {
if err := store.WriteBlob(c.Context(), configDigest, strings.NewReader(string(configContent))); err != nil {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to write config blob: %w", err))
|
||||
}
|
||||
|
||||
if err := db.Create(&model.Blob{
|
||||
Digest: configDigest,
|
||||
Size: int64(len(configContent)),
|
||||
MediaType: "application/vnd.docker.container.image.v1+json",
|
||||
Repository: repoName,
|
||||
}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to save config blob metadata: %w", err))
|
||||
}
|
||||
}
|
||||
|
||||
layerDigests := make([]map[string]interface{}, 0, len(manifestItem.Layers))
|
||||
for _, layerPath := range manifestItem.Layers {
|
||||
content, ok := layerContents[layerPath]
|
||||
if !ok {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("layer %s not found in tar", layerPath))
|
||||
}
|
||||
|
||||
hash := sha256.Sum256(content)
|
||||
digest := "sha256:" + hex.EncodeToString(hash[:])
|
||||
|
||||
if err := store.WriteBlob(c.Context(), digest, strings.NewReader(string(content))); err != nil {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to write layer blob: %w", err))
|
||||
}
|
||||
|
||||
if err := db.Create(&model.Blob{
|
||||
Digest: digest,
|
||||
Size: int64(len(content)),
|
||||
MediaType: "application/vnd.docker.image.rootfs.diff.tar.gzip",
|
||||
Repository: repoName,
|
||||
}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to save layer blob metadata: %w", err))
|
||||
}
|
||||
|
||||
layerDigests = append(layerDigests, map[string]interface{}{
|
||||
"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
|
||||
"size": len(content),
|
||||
"digest": digest,
|
||||
})
|
||||
}
|
||||
|
||||
manifestData := map[string]interface{}{
|
||||
"schemaVersion": 2,
|
||||
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
|
||||
"config": map[string]interface{}{
|
||||
"mediaType": "application/vnd.docker.container.image.v1+json",
|
||||
"size": len(configContent),
|
||||
"digest": configDigest,
|
||||
},
|
||||
"layers": layerDigests,
|
||||
}
|
||||
|
||||
manifestJSON, err := json.Marshal(manifestData)
|
||||
if err != nil {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to marshal manifest: %w", err))
|
||||
}
|
||||
|
||||
manifestHash := sha256.Sum256(manifestJSON)
|
||||
manifestDigest := "sha256:" + hex.EncodeToString(manifestHash[:])
|
||||
|
||||
if err := store.WriteManifest(c.Context(), manifestDigest, manifestJSON); err != nil {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to write manifest: %w", err))
|
||||
}
|
||||
|
||||
if err := db.Create(&model.Manifest{
|
||||
Repository: repoName,
|
||||
Tag: tag,
|
||||
Digest: manifestDigest,
|
||||
MediaType: "application/vnd.docker.distribution.manifest.v2+json",
|
||||
Size: int64(len(manifestJSON)),
|
||||
Content: manifestJSON,
|
||||
}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to save manifest: %w", err))
|
||||
}
|
||||
|
||||
if err := db.Create(&model.Tag{
|
||||
Repository: repoName,
|
||||
Tag: tag,
|
||||
Digest: manifestDigest,
|
||||
}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
|
||||
return resp.R500(c, "", nil, fmt.Errorf("failed to save tag: %w", err))
|
||||
}
|
||||
|
||||
return resp.R200(c, map[string]interface{}{
|
||||
"message": "upload success",
|
||||
"repository": repoName,
|
||||
"tag": tag,
|
||||
"digest": manifestDigest,
|
||||
"original_tag": originalRepoTag,
|
||||
})
|
||||
}
|
||||
}
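The RepoTags parsing above can be exercised in isolation. A minimal standalone sketch (the function name is illustrative; the handler inlines this logic). Note a limitation it shares with the handler: splitting on the first ':' mis-handles references whose registry host carries a port, such as "localhost:5000/redis".

```go
package main

import (
	"fmt"
	"strings"
)

// splitRepoTag mirrors the tar-upload logic: split "repo:tag" on the
// first ':' and default the tag to "latest".
func splitRepoTag(repoTag string) (repo, tag string) {
	parts := strings.SplitN(repoTag, ":", 2)
	repo, tag = parts[0], "latest"
	if len(parts) == 2 {
		tag = parts[1]
	}
	return
}

func main() {
	fmt.Println(splitRepoTag("docker.io/redis:7.2")) // docker.io/redis 7.2
	fmt.Println(splitRepoTag("busybox"))             // busybox latest
}
```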
func handleOCIFormat(c fiber.Ctx, db *gorm.DB, store store.Store, ociIndex *OCIIndex, blobContents map[string][]byte, indexJSON []byte) error {
	// Get the image name and tag from the index annotations
	if len(ociIndex.Manifests) == 0 {
		return resp.R400(c, "INVALID_OCI_INDEX", nil, "no manifests found in index.json")
	}

	manifestDesc := ociIndex.Manifests[0]
	imageName := ""
	tag := "latest"

	if manifestDesc.Annotations != nil {
		if name, ok := manifestDesc.Annotations["io.containerd.image.name"]; ok {
			imageName = name
		} else if name, ok := manifestDesc.Annotations["org.opencontainers.image.ref.name"]; ok {
			tag = name
		}
	}

	if imageName == "" {
		return resp.R400(c, "INVALID_OCI_INDEX", nil, "image name not found in annotations")
	}

	// Extract repo and tag from the image name
	parts := strings.SplitN(imageName, ":", 2)
	repoName := parts[0]
	if len(parts) == 2 {
		tag = parts[1]
	}

	// Create repository
	if err := db.FirstOrCreate(&model.Repository{}, model.Repository{Name: repoName}).Error; err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to create repository: %w", err))
	}

	if err := store.CreatePartition(c.Context(), "registry"); err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to create partition: %w", err))
	}

	// Store index.json as a blob
	indexHash := sha256.Sum256(indexJSON)
	indexDigest := "sha256:" + hex.EncodeToString(indexHash[:])
	if err := store.WriteBlob(c.Context(), indexDigest, strings.NewReader(string(indexJSON))); err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to write index blob: %w", err))
	}
	if err := db.Create(&model.Blob{
		Digest:     indexDigest,
		Size:       int64(len(indexJSON)),
		MediaType:  "application/vnd.oci.image.index.v1+json",
		Repository: repoName,
	}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
		return resp.R500(c, "", nil, fmt.Errorf("failed to save index blob metadata: %w", err))
	}

	// Process the manifest blob
	manifestBlobPath := "blobs/sha256/" + strings.TrimPrefix(manifestDesc.Digest, "sha256:")
	manifestContent, ok := blobContents[manifestBlobPath]
	if !ok {
		return resp.R500(c, "", nil, fmt.Errorf("manifest blob %s not found in tar", manifestBlobPath))
	}

	// Parse OCI manifest
	var ociManifest OCIManifest
	if err := json.Unmarshal(manifestContent, &ociManifest); err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to parse OCI manifest: %w", err))
	}

	// Store config blob
	configBlobPath := "blobs/sha256/" + strings.TrimPrefix(ociManifest.Config.Digest, "sha256:")
	configContent, ok := blobContents[configBlobPath]
	if !ok {
		return resp.R500(c, "", nil, fmt.Errorf("config blob %s not found in tar", configBlobPath))
	}

	if err := store.WriteBlob(c.Context(), ociManifest.Config.Digest, strings.NewReader(string(configContent))); err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to write config blob: %w", err))
	}
	if err := db.Create(&model.Blob{
		Digest:     ociManifest.Config.Digest,
		Size:       ociManifest.Config.Size,
		MediaType:  ociManifest.Config.MediaType,
		Repository: repoName,
	}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
		return resp.R500(c, "", nil, fmt.Errorf("failed to save config blob metadata: %w", err))
	}

	// Store layer blobs
	for _, layer := range ociManifest.Layers {
		layerBlobPath := "blobs/sha256/" + strings.TrimPrefix(layer.Digest, "sha256:")
		layerContent, ok := blobContents[layerBlobPath]
		if !ok {
			return resp.R500(c, "", nil, fmt.Errorf("layer blob %s not found in tar", layerBlobPath))
		}

		if err := store.WriteBlob(c.Context(), layer.Digest, strings.NewReader(string(layerContent))); err != nil {
			return resp.R500(c, "", nil, fmt.Errorf("failed to write layer blob: %w", err))
		}
		if err := db.Create(&model.Blob{
			Digest:     layer.Digest,
			Size:       layer.Size,
			MediaType:  layer.MediaType,
			Repository: repoName,
		}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
			return resp.R500(c, "", nil, fmt.Errorf("failed to save layer blob metadata: %w", err))
		}
	}

	// Convert the OCI manifest to Docker manifest v2 format for compatibility
	layers := []map[string]interface{}{}
	for _, layer := range ociManifest.Layers {
		layers = append(layers, map[string]interface{}{
			"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
			"size":      layer.Size,
			"digest":    layer.Digest,
		})
	}

	manifestData := map[string]interface{}{
		"schemaVersion": 2,
		"mediaType":     "application/vnd.docker.distribution.manifest.v2+json",
		"config": map[string]interface{}{
			"mediaType": "application/vnd.docker.container.image.v1+json",
			"size":      ociManifest.Config.Size,
			"digest":    ociManifest.Config.Digest,
		},
		"layers": layers,
	}

	manifestJSON, err := json.Marshal(manifestData)
	if err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to marshal manifest: %w", err))
	}

	manifestHash := sha256.Sum256(manifestJSON)
	manifestDigest := "sha256:" + hex.EncodeToString(manifestHash[:])

	if err := store.WriteManifest(c.Context(), manifestDigest, manifestJSON); err != nil {
		return resp.R500(c, "", nil, fmt.Errorf("failed to write manifest: %w", err))
	}

	if err := db.Create(&model.Manifest{
		Repository: repoName,
		Tag:        tag,
		Digest:     manifestDigest,
		MediaType:  "application/vnd.docker.distribution.manifest.v2+json",
		Size:       int64(len(manifestJSON)),
		Content:    manifestJSON,
	}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
		return resp.R500(c, "", nil, fmt.Errorf("failed to save manifest: %w", err))
	}

	if err := db.Create(&model.Tag{
		Repository: repoName,
		Tag:        tag,
		Digest:     manifestDigest,
	}).Error; err != nil && !strings.Contains(err.Error(), "UNIQUE constraint failed") {
		return resp.R500(c, "", nil, fmt.Errorf("failed to save tag: %w", err))
	}

	return resp.R200(c, map[string]interface{}{
		"message":      "upload success (OCI format)",
		"repository":   repoName,
		"tag":          tag,
		"digest":       manifestDigest,
		"original_tag": imageName,
	})
}
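Both upload paths derive content addresses the same way: SHA-256 over the raw bytes, hex-encoded, prefixed with "sha256:". A standalone sketch of that construction (the helper name is illustrative; the handlers inline it):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// digest builds an OCI-style content digest from raw bytes.
func digest(b []byte) string {
	h := sha256.Sum256(b)
	return "sha256:" + hex.EncodeToString(h[:])
}

func main() {
	// 7-character "sha256:" prefix plus 64 hex characters, 71 bytes total
	fmt.Println(len(digest([]byte("{}")))) // 71
}
```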
@@ -75,6 +75,17 @@ func HandleManifest(c fiber.Ctx, db *gorm.DB, store store.Store) error {

	// The repository name is everything in the path before the "manifests" segment
	repo := strings.Join(parts[:manifestsIndex], "/")

	// Strip the registry_address prefix from repo if present
	var registryConfig model.RegistryConfig
	registryAddress := ""
	if err := db.Where("key = ?", "registry_address").First(&registryConfig).Error; err == nil {
		registryAddress = registryConfig.Value
	}
	if registryAddress != "" && strings.HasPrefix(repo, registryAddress+"/") {
		repo = strings.TrimPrefix(repo, registryAddress+"/")
	}

	// The tag is the path segment right after "manifests"
	tag := parts[manifestsIndex+1]
@@ -13,13 +13,13 @@ import (
)

func Registry(ctx context.Context, db *gorm.DB, store store.Store) fiber.Handler {
	// Migrate the registry tables
	if err := db.AutoMigrate(
		&model.Repository{},
		&model.Blob{},
		&model.Manifest{},
		&model.Tag{},
		&model.BlobUpload{},
		&model.RegistryConfig{},
	); err != nil {
		log.Fatalf("failed to migrate database: %v", err)
	}
@@ -39,6 +39,16 @@ func HandleTags(c fiber.Ctx, db *gorm.DB, store store.Store) error {
	// The repository name is everything in the path before the "tags" segment
	repo := strings.Join(parts[:tagsIndex], "/")

	// Strip the registry_address prefix from repo if present
	var registryConfig model.RegistryConfig
	registryAddress := ""
	if err := db.Where("key = ?", "registry_address").First(&registryConfig).Error; err == nil {
		registryAddress = registryConfig.Value
	}
	if registryAddress != "" && strings.HasPrefix(repo, registryAddress+"/") {
		repo = strings.TrimPrefix(repo, registryAddress+"/")
	}

	// Pagination parameters
	nStr := c.Query("n", "100")
	n, err := strconv.Atoi(nStr)
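The registry_address stripping used by both HandleManifest and HandleTags, pulled out as a pure function for illustration (the handlers inline this; the function name is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// stripRegistry removes a configured registry_address prefix from a
// repository path so lookups use the bare repository name.
func stripRegistry(repo, registry string) string {
	if registry != "" && strings.HasPrefix(repo, registry+"/") {
		return strings.TrimPrefix(repo, registry+"/")
	}
	return repo
}

func main() {
	fmt.Println(stripRegistry("registry.local:9119/docker.io/redis", "registry.local:9119")) // docker.io/redis
	fmt.Println(stripRegistry("docker.io/redis", "registry.local:9119"))                     // docker.io/redis
}
```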
main.go (2 changes)
@@ -19,6 +19,6 @@ func main() {
	defer cancel()

	if err := cmd.Run(ctx); err != nil {
-		log.Fatalf("Failed to run command: %v", err)
+		log.Fatalf("[F] Failed to run command: %v", err)
	}
}
pkg/tool/ctx.go (new file, 38 lines)
@@ -0,0 +1,38 @@
package tool

import (
	"context"
	"time"
)

func Timeout(seconds ...int) (ctx context.Context) {
	var duration time.Duration

	if len(seconds) > 0 && seconds[0] > 0 {
		duration = time.Duration(seconds[0]) * time.Second
	} else {
		duration = 30 * time.Second
	}

	// The cancel func is deliberately discarded; the context is released
	// when the timeout fires.
	ctx, _ = context.WithTimeout(context.Background(), duration)

	return
}

func TimeoutCtx(ctx context.Context, seconds ...int) context.Context {
	var duration time.Duration

	if len(seconds) > 0 && seconds[0] > 0 {
		duration = time.Duration(seconds[0]) * time.Second
	} else {
		duration = 30 * time.Second
	}

	nctx, _ := context.WithTimeout(ctx, duration)

	return nctx
}
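Usage of the Timeout helper above, sketched standalone (a local copy of the same logic, since the package is not importable here): the returned context carries a deadline even though the cancel func is dropped.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// timeout mirrors tool.Timeout: a background context that expires after
// the given number of seconds, defaulting to 30.
func timeout(seconds ...int) context.Context {
	d := 30 * time.Second
	if len(seconds) > 0 && seconds[0] > 0 {
		d = time.Duration(seconds[0]) * time.Second
	}
	ctx, _ := context.WithTimeout(context.Background(), d)
	return ctx
}

func main() {
	ctx := timeout(1)
	_, ok := ctx.Deadline()
	fmt.Println(ok) // true
}
```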
pkg/tool/file.go (new file, 157 lines)
@@ -0,0 +1,157 @@
package tool

import (
	"crypto/md5"
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// FileMD5 calculates a file's MD5 digest.
// - returns "" if the file does not exist
// - returns "" if _path is a directory
func FileMD5(_path string) string {
	// Check whether the file exists
	fileInfo, err := os.Stat(_path)
	if err != nil {
		if os.IsNotExist(err) {
			return ""
		}
		// Other stat errors also return an empty string
		return ""
	}

	// Directories are not hashable
	if fileInfo.IsDir() {
		return ""
	}

	// Open the file
	file, err := os.Open(_path)
	if err != nil {
		return ""
	}
	defer file.Close()

	// Create the MD5 hasher
	hash := md5.New()

	// Stream the file contents into the hasher
	if _, err := io.Copy(hash, file); err != nil {
		return ""
	}

	// Return the MD5 digest as a hex string
	return fmt.Sprintf("%x", hash.Sum(nil))
}

// CopyFile copies a file from src to dst.
// Returns an error if source/destination are the same, source isn't a regular file,
// or any step in the copy process fails.
func CopyFile(src, dst string) error {
	// Open source file
	srcFile, err := os.Open(src)
	if err != nil {
		return fmt.Errorf("failed to open source: %w", err)
	}
	defer srcFile.Close()

	// Get source file metadata
	srcInfo, err := srcFile.Stat()
	if err != nil {
		return fmt.Errorf("failed to get source info: %w", err)
	}

	// Verify source is a regular file
	if !srcInfo.Mode().IsRegular() {
		return fmt.Errorf("source is not a regular file")
	}

	// Check if source and destination are the same file
	if same, err := sameFile(src, dst, srcInfo); same {
		return fmt.Errorf("source and destination are the same file")
	} else if err != nil {
		return err
	}

	// Create destination directory structure
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return fmt.Errorf("failed to create destination directory: %w", err)
	}

	// Create destination file with source permissions
	dstFile, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, srcInfo.Mode())
	if err != nil {
		return fmt.Errorf("failed to create destination: %w", err)
	}

	// Copy contents and handle destination close errors
	_, err = io.Copy(dstFile, srcFile)
	if closeErr := dstFile.Close(); closeErr != nil && err == nil {
		err = fmt.Errorf("failed to close destination: %w", closeErr)
	}
	if err != nil {
		return fmt.Errorf("copy failed: %w", err)
	}

	return nil
}

// sameFile checks if src and dst refer to the same file using device/inode numbers
func sameFile(src, dst string, srcInfo os.FileInfo) (bool, error) {
	dstInfo, err := os.Stat(dst)
	if os.IsNotExist(err) {
		return false, nil // Destination doesn't exist
	}
	if err != nil {
		return false, err // Other errors
	}
	return os.SameFile(srcInfo, dstInfo), nil
}

// CopyDir copies src dir to dst dir recursively.
// If dst does not exist, it is created; existing files are overwritten.
func CopyDir(src, dst string) error {
	srcInfo, err := os.Stat(src)
	if err != nil {
		return fmt.Errorf("stat src dir failed: %w", err)
	}
	if !srcInfo.IsDir() {
		return fmt.Errorf("source is not a directory")
	}

	// Create destination directory if it does not exist
	if err := os.MkdirAll(dst, srcInfo.Mode()); err != nil {
		return fmt.Errorf("failed to create destination directory: %w", err)
	}

	entries, err := os.ReadDir(src)
	if err != nil {
		return fmt.Errorf("failed to read source directory: %w", err)
	}

	for _, entry := range entries {
		srcPath := filepath.Join(src, entry.Name())
		dstPath := filepath.Join(dst, entry.Name())

		info, err := entry.Info()
		if err != nil {
			return fmt.Errorf("failed to get info for %s: %w", srcPath, err)
		}

		if info.IsDir() {
			// Recursively copy subdirectory
			if err := CopyDir(srcPath, dstPath); err != nil {
				return err
			}
		} else {
			// Copy file, overwrite if exists
			if err := CopyFile(srcPath, dstPath); err != nil {
				return err
			}
		}
	}
	return nil
}
pkg/tool/human.go (new file, 76 lines)
@@ -0,0 +1,76 @@
package tool

import "fmt"

const (
	_  = iota
	KB = 1 << (10 * iota) // 1 KB = 1024 bytes
	MB                    // 1 MB = 1024 KB
	GB                    // 1 GB = 1024 MB
	TB                    // 1 TB = 1024 GB
	PB                    // 1 PB = 1024 TB
)

func HumanDuration(nano int64) string {
	duration := float64(nano)
	unit := "ns"
	if duration >= 1000 {
		duration /= 1000
		unit = "us"
	}

	if duration >= 1000 {
		duration /= 1000
		unit = "ms"
	}

	if duration >= 1000 {
		duration /= 1000
		unit = " s"
	}

	return fmt.Sprintf("%6.2f%s", duration, unit)
}

func HumanSize(size int64) string {
	switch {
	case size >= PB:
		return fmt.Sprintf("%.2f PB", float64(size)/PB)
	case size >= TB:
		return fmt.Sprintf("%.2f TB", float64(size)/TB)
	case size >= GB:
		return fmt.Sprintf("%.2f GB", float64(size)/GB)
	case size >= MB:
		return fmt.Sprintf("%.2f MB", float64(size)/MB)
	case size >= KB:
		return fmt.Sprintf("%.2f KB", float64(size)/KB)
	default:
		return fmt.Sprintf("%d bytes", size)
	}
}

// BytesToUnit converts bytes into the given unit
func BytesToUnit(bytes int64, unit float64) float64 {
	return float64(bytes) / unit
}

// BytesToKB converts bytes to KB
func BytesToKB(bytes int64) float64 {
	return BytesToUnit(bytes, KB)
}

// BytesToMB converts bytes to MB
func BytesToMB(bytes int64) float64 {
	return BytesToUnit(bytes, MB)
}

// BytesToGB converts bytes to GB
func BytesToGB(bytes int64) float64 {
	return BytesToUnit(bytes, GB)
}

// BytesToTB converts bytes to TB
func BytesToTB(bytes int64) float64 {
	return BytesToUnit(bytes, TB)
}
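A trimmed restatement of HumanSize for a quick check of the unit boundaries (KB/MB/GB only, constants as in the file above):

```go
package main

import "fmt"

const (
	kb = 1 << 10
	mb = 1 << 20
	gb = 1 << 30
)

// humanSize is a minimal copy of HumanSize above, truncated to three units.
func humanSize(size int64) string {
	switch {
	case size >= gb:
		return fmt.Sprintf("%.2f GB", float64(size)/gb)
	case size >= mb:
		return fmt.Sprintf("%.2f MB", float64(size)/mb)
	case size >= kb:
		return fmt.Sprintf("%.2f KB", float64(size)/kb)
	default:
		return fmt.Sprintf("%d bytes", size)
	}
}

func main() {
	fmt.Println(humanSize(1536))    // 1.50 KB
	fmt.Println(humanSize(1 << 20)) // 1.00 MB
}
```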
pkg/tool/ip.go (new file, 229 lines)
@@ -0,0 +1,229 @@
package tool

import (
	"encoding/binary"
	"errors"
	"fmt"
	"net"
)

var (
	privateIPv4Blocks []*net.IPNet
	privateIPv6Blocks []*net.IPNet
)

func init() {
	// IPv4 private address ranges
	for _, cidr := range []string{
		"10.0.0.0/8",     // class A private range
		"172.16.0.0/12",  // class B private range
		"192.168.0.0/16", // class C private range
		"169.254.0.0/16", // link-local
		"127.0.0.0/8",    // loopback
	} {
		_, block, _ := net.ParseCIDR(cidr)
		privateIPv4Blocks = append(privateIPv4Blocks, block)
	}

	// IPv6 private address ranges
	for _, cidr := range []string{
		"fc00::/7",  // unique local addresses
		"fe80::/10", // link-local
		"::1/128",   // loopback
	} {
		_, block, _ := net.ParseCIDR(cidr)
		privateIPv6Blocks = append(privateIPv6Blocks, block)
	}
}

func IsPrivateIP(ipStr string) bool {
	ip := net.ParseIP(ipStr)
	if ip == nil {
		return false
	}

	// Handle IPv4 and IPv4-mapped IPv6 addresses
	if ip4 := ip.To4(); ip4 != nil {
		for _, block := range privateIPv4Blocks {
			if block.Contains(ip4) {
				return true
			}
		}
		return false
	}

	// Handle IPv6 addresses
	for _, block := range privateIPv6Blocks {
		if block.Contains(ip) {
			return true
		}
	}
	return false
}

func IP2Int(ip net.IP) uint32 {
	if ip == nil {
		return 0
	}

	ip = ip.To4()
	if ip == nil {
		return 0
	}

	return binary.BigEndian.Uint32(ip)
}

func Int2IP(ip uint32) net.IP {
	data := make(net.IP, 4)
	binary.BigEndian.PutUint32(data, ip)
	return data
}

func IPStr2Int(ipStr string) *uint32 {
	ip := IP2Int(net.ParseIP(ipStr))
	if ip == 0 {
		return nil
	}
	return &ip
}

func Int2IPStr(ip uint32) string {
	return Int2IP(ip).String()
}

func GetLastIP4(cidr string) (lastIP4 uint32, err error) {
	ip, ipNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return
	}

	firstIP4 := IP2Int(ip.Mask(ipNet.Mask))
	ipNetMaskInt := binary.BigEndian.Uint32(ipNet.Mask)
	lastIP4 = firstIP4 | ^ipNetMaskInt
	return
}

func IsCIDRConflict(cidr1, cidr2 string) (conflict bool, err error) {
	_, ipNet1, err := net.ParseCIDR(cidr1)
	if err != nil {
		return
	}
	_, ipNet2, err := net.ParseCIDR(cidr2)
	if err != nil {
		return
	}

	if ipNet2.Contains(ipNet1.IP) || ipNet1.Contains(ipNet2.IP) {
		conflict = true
	}

	return
}

func GetCIDRs(startCIDR, endCIDR string, mask uint8) (cidrs []string, err error) {
	cidrs = append(cidrs, startCIDR)

	currentCIDR := startCIDR
	for {
		lastIP4, err := GetLastIP4(currentCIDR)
		if err != nil {
			return cidrs, err
		}

		nextCIDR := Int2IPStr(lastIP4+1) + fmt.Sprintf("/%d", mask)
		cidrs = append(cidrs, nextCIDR)

		conflict, err := IsCIDRConflict(nextCIDR, endCIDR)
		if err != nil {
			return cidrs, err
		}
		if conflict {
			break
		}

		currentCIDR = nextCIDR
	}

	return
}

func GetLocalIP() (ip string, err error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return
	}

	for _, iface := range ifaces {
		if iface.Name != "eth0" {
			continue
		}

		addrs, err := iface.Addrs()
		if err != nil {
			continue
		}

		for _, addr := range addrs {
			switch v := addr.(type) {
			case *net.IPNet:
				ip = v.IP.String()
				if ip == "192.168.88.88" {
					continue
				}
				return ip, err
			case *net.IPAddr:
				ip = v.IP.String()
				if ip == "192.168.88.88" {
					continue
				}
				return ip, err
			}
		}
	}

	ip = "127.0.0.1"

	return
}

func LookupIP(host string) (ip string, err error) {
	ips, err := net.LookupIP(host)
	if err != nil {
		return
	}

	for _, i := range ips {
		if ipv4 := i.To4(); ipv4 != nil {
			ip = ipv4.String()
			break
		}
	}

	return
}

func LookupIPv4(host string) (uint32, error) {
	ips, err := net.LookupIP(host)
	if err != nil {
		return 0, err
	}

	for _, i := range ips {
		if ipv4 := i.To4(); ipv4 != nil {
			return binary.BigEndian.Uint32(ipv4), nil
		}
	}

	return 0, errors.New("host not found " + host)
}

func ResolveIPv4(host string) (uint32, error) {
	addr, err := net.ResolveIPAddr("ip4", host)
	if err == nil && addr != nil {
		addrV4 := binary.BigEndian.Uint32(addr.IP.To4())

		return addrV4, err
	}

	return 0, err
}
pkg/tool/loadash.go (new file, 76 lines)
@@ -0,0 +1,76 @@
package tool

import "math"

func Map[T, R any](vals []T, fn func(item T, index int) R) []R {
	var result = make([]R, len(vals))
	for idx, v := range vals {
		result[idx] = fn(v, idx)
	}
	return result
}

func Chunk[T any](vals []T, size int) [][]T {
	if size <= 0 {
		panic("Second parameter must be greater than 0")
	}

	chunksNum := len(vals) / size
	if len(vals)%size != 0 {
		chunksNum += 1
	}

	result := make([][]T, 0, chunksNum)

	for i := 0; i < chunksNum; i++ {
		last := (i + 1) * size
		if last > len(vals) {
			last = len(vals)
		}
		result = append(result, vals[i*size:last:last])
	}

	return result
}

// Sample picks x evenly spaced elements from vals
func Sample[T any](vals []T, x int) []T {
	if x < 0 {
		panic("Second parameter can't be negative")
	}

	n := len(vals)
	if n == 0 {
		return []T{}
	}

	if x >= n {
		return vals
	}

	// Special case: x == 1 takes the middle element
	if x == 1 {
		return []T{vals[(n-1)/2]}
	}

	// Compute the sampling step and build the result
	step := float64(n-1) / float64(x-1)
	result := make([]T, x)

	for i := 0; i < x; i++ {
		// Round each sampling position to the nearest index
		pos := float64(i) * step
		index := int(math.Round(pos))
		result[i] = vals[index]
	}

	return result
}

func If[T any](cond bool, trueVal, falseVal T) T {
	if cond {
		return trueVal
	}

	return falseVal
}
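Chunk's size arithmetic can be sanity-checked with a small standalone copy; the ceiling division here is equivalent to the divide-and-remainder form above:

```go
package main

import "fmt"

// chunk mirrors tool.Chunk: split a slice into pieces of at most size
// elements, with the last piece holding the remainder.
func chunk[T any](vals []T, size int) [][]T {
	n := (len(vals) + size - 1) / size // ceiling division
	out := make([][]T, 0, n)
	for i := 0; i < n; i++ {
		end := (i + 1) * size
		if end > len(vals) {
			end = len(vals)
		}
		out = append(out, vals[i*size:end])
	}
	return out
}

func main() {
	fmt.Println(chunk([]int{1, 2, 3, 4, 5}, 2)) // [[1 2] [3 4] [5]]
}
```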
pkg/tool/mask.go (new file, 57 lines)
@@ -0,0 +1,57 @@
package tool

import (
	"strings"
)

func MaskJWT(token string) string {
	if token == "" {
		return ""
	}

	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return MaskString(token, 5, 5, -1, "*")
	}

	h, p, s := parts[0], parts[1], parts[2]
	h, p, s = MaskString(h, 5, 5, 8, "*"), MaskString(p, 5, 5, 8, "*"), MaskString(s, 5, 5, 8, "*")
	return h + "." + p + "." + s
}

// MaskString replaces the middle of a string with maskChar.
//
// start: keep the first start characters
// end: keep the last end characters
// maskLen: length of the masked middle (negative keeps the original length)
// maskChar: the masking character
func MaskString(s string, start, end, maskLen int, maskChar string) string {
	if s == "" {
		return ""
	}

	totalLen := len(s)
	if start < 0 {
		start = 0
	}

	if end < 0 {
		end = 0
	}

	if maskChar == "" {
		maskChar = "*"
	}

	maxMaskLen := totalLen - start - end
	if maxMaskLen <= 0 {
		return strings.Repeat(maskChar, totalLen)
	}

	if maskLen < 0 || maskLen > maxMaskLen {
		maskLen = maxMaskLen
	}

	startPart, endPart := s[:start], s[totalLen-end:]
	return startPart + strings.Repeat(maskChar, maskLen) + endPart
}
|
||||
76
pkg/tool/must.go
Normal file
@@ -0,0 +1,76 @@
package tool

import (
	"context"
	"log"
	"reflect"
	"runtime"
	"sync"
)

// Must panics on the first non-nil error.
func Must(errs ...error) {
	for _, err := range errs {
		if err != nil {
			log.Panic(err.Error())
		}
	}
}

// MustWithData returns data, panicking if err is non-nil.
func MustWithData[T any](data T, err error) T {
	Must(err)
	return data
}

// MustStop runs the stop functions concurrently and waits for them to finish.
// If ctx is done before they all return, the process exits via log.Fatal.
func MustStop(ctx context.Context, stopFns ...func(ctx context.Context) error) {
	getFunctionName := func(i interface{}) string {
		return runtime.FuncForPC(reflect.ValueOf(i).Pointer()).Name()
	}

	if len(stopFns) == 0 {
		return
	}

	ok := make(chan struct{})

	wg := &sync.WaitGroup{}

	for _, fn := range stopFns {
		if fn != nil {
			wg.Add(1)

			go func(c context.Context, fn func(ctx context.Context) error) {
				defer func() {
					wg.Done()
					log.Printf("stop func[%s] done", getFunctionName(fn))
				}()

				if err := fn(c); err != nil {
					log.Printf("stop function failed, err = %s", err.Error())
				}
			}(ctx, fn) // pass fn explicitly to avoid loop-variable capture before Go 1.22
		}
	}

	go func() {
		select {
		case <-ctx.Done():
			log.Fatal("stop function timeout, force down")
		case <-ok:
			log.Printf("shutdown gracefully...")
			return
		}
	}()

	wg.Wait()
	close(ok)
}

// IgnoreError logs and discards err, returning item unchanged.
func IgnoreError[T any](item T, err error) T {
	if err != nil {
		log.Printf("[W] !!! ignore error: %s", err.Error())
	}

	return item
}
85
pkg/tool/password.go
Normal file
@@ -0,0 +1,85 @@
package tool

import (
	"crypto/sha256"
	"encoding/hex"
	"errors"
	"fmt"
	"log"
	"regexp"
	"strconv"
	"strings"

	"golang.org/x/crypto/pbkdf2"
)

const (
	EncryptHeader string = "pbkdf2:sha256" // user password encryption scheme
)

// NewPassword encrypts password with a random salt and iteration count.
func NewPassword(password string) string {
	return EncryptPassword(password, RandomString(8), int(RandomInt(50000)+100000))
}

// ComparePassword checks the plaintext in against the stored value db
// (format: "pbkdf2:sha256:<iter>$<salt>$<hash>").
func ComparePassword(in, db string) bool {
	strs := strings.Split(db, "$")
	if len(strs) != 3 {
		log.Printf("[E] password in db invalid: %s", db)
		return false
	}

	encs := strings.Split(strs[0], ":")
	if len(encs) != 3 {
		log.Printf("[E] password in db invalid: %s", db)
		return false
	}

	encIteration, err := strconv.Atoi(encs[2])
	if err != nil {
		log.Printf("[E] password in db invalid: %s, convert iter err: %s", db, err)
		return false
	}

	return EncryptPassword(in, strs[1], encIteration) == db
}

// EncryptPassword derives a PBKDF2-SHA256 hash and encodes it together with
// the iteration count and salt.
func EncryptPassword(password, salt string, iter int) string {
	hash := pbkdf2.Key([]byte(password), []byte(salt), iter, 32, sha256.New)
	encrypted := hex.EncodeToString(hash)
	return fmt.Sprintf("%s:%d$%s$%s", EncryptHeader, iter, salt, encrypted)
}

// CheckPassword validates password strength: 8-32 characters and at least
// three of the four classes: digits, lowercase, uppercase, symbols '!@#%'.
func CheckPassword(password string) error {
	if len(password) < 8 || len(password) > 32 {
		return errors.New("密码长度不符合") // invalid password length
	}

	var (
		err          error
		match        bool
		patternList  = []string{`[0-9]+`, `[a-z]+`, `[A-Z]+`, `[!@#%]+`} //, `[~!@#$%^&*?_-]+`}
		matchAccount = 0
		tips         = []string{"缺少数字", "缺少小写字母", "缺少大写字母", "缺少'!@#%'"}
		locktips     = make([]string, 0)
	)

	for idx, pattern := range patternList {
		match, err = regexp.MatchString(pattern, password)
		if err != nil {
			log.Printf("[E] regex match string err, reg_str: %s, err: %v", pattern, err)
			return errors.New("密码强度不够") // password not strong enough
		}

		if match {
			matchAccount++
		} else {
			locktips = append(locktips, tips[idx])
		}
	}

	if matchAccount < 3 {
		return fmt.Errorf("密码强度不够, 可能 %s", strings.Join(locktips, ", "))
	}

	return nil
}
20
pkg/tool/password_test.go
Normal file
@@ -0,0 +1,20 @@
package tool

import "testing"

func TestEncPassword(t *testing.T) {
	password := "123456"

	result := EncryptPassword(password, RandomString(8), 50000)

	t.Logf("sum => %s", result)
}

func TestPassword(t *testing.T) {
	p := "wahaha@123"
	p = NewPassword(p)
	t.Logf("password => %s", p)

	result := ComparePassword("wahaha@123", p)
	t.Logf("compare result => %v", result)
}
75
pkg/tool/random.go
Normal file
@@ -0,0 +1,75 @@
package tool

import (
	"crypto/rand"
	"math/big"
	mrand "math/rand"
)

var (
	letters   = []byte("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
	letterNum = []byte("0123456789")
	letterLow = []byte("abcdefghijklmnopqrstuvwxyz")
	letterCap = []byte("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
	letterSyb = []byte("!@#$%^&*()_+-=")

	adjectives = []string{
		"开心的", "灿烂的", "温暖的", "阳光的", "活泼的",
		"聪明的", "优雅的", "幸运的", "甜蜜的", "勇敢的",
		"宁静的", "热情的", "温柔的", "幽默的", "坚强的",
		"迷人的", "神奇的", "快乐的", "健康的", "自由的",
		"梦幻的", "勤劳的", "真诚的", "浪漫的", "自信的",
	}

	plants = []string{
		"苹果", "香蕉", "橘子", "葡萄", "草莓",
		"西瓜", "樱桃", "菠萝", "柠檬", "蜜桃",
		"蓝莓", "芒果", "石榴", "甜瓜", "雪梨",
		"番茄", "南瓜", "土豆", "青椒", "洋葱",
		"黄瓜", "萝卜", "豌豆", "玉米", "蘑菇",
		"菠菜", "茄子", "芹菜", "莲藕", "西兰花",
	}
)

// RandomInt returns a cryptographically random integer in [0, max).
// max must be > 0, otherwise crypto/rand.Int panics.
func RandomInt(max int64) int64 {
	num, _ := rand.Int(rand.Reader, big.NewInt(max))
	return num.Int64()
}

// RandomString returns a random alphanumeric string of the given length.
func RandomString(length int) string {
	result := make([]byte, length)
	for i := 0; i < length; i++ {
		num, _ := rand.Int(rand.Reader, big.NewInt(int64(len(letters))))
		result[i] = letters[num.Int64()]
	}
	return string(result)
}

// RandomPassword returns a random password that cycles through digits,
// lowercase, uppercase and (optionally) symbols, so every enabled class
// appears once the length reaches the number of classes.
func RandomPassword(length int, withSymbol bool) string {
	result := make([]byte, length)
	kind := 3
	if withSymbol {
		kind++
	}

	for i := 0; i < length; i++ {
		switch i % kind {
		case 0:
			num, _ := rand.Int(rand.Reader, big.NewInt(int64(len(letterNum))))
			result[i] = letterNum[num.Int64()]
		case 1:
			num, _ := rand.Int(rand.Reader, big.NewInt(int64(len(letterLow))))
			result[i] = letterLow[num.Int64()]
		case 2:
			num, _ := rand.Int(rand.Reader, big.NewInt(int64(len(letterCap))))
			result[i] = letterCap[num.Int64()]
		case 3:
			num, _ := rand.Int(rand.Reader, big.NewInt(int64(len(letterSyb))))
			result[i] = letterSyb[num.Int64()]
		}
	}
	return string(result)
}

// RandomName returns a readable name (adjective + plant), e.g. "开心的苹果".
func RandomName() string {
	return adjectives[mrand.Intn(len(adjectives))] + plants[mrand.Intn(len(plants))]
}
71
pkg/tool/string.go
Normal file
@@ -0,0 +1,71 @@
package tool

import (
	"encoding/json"
	"strings"
	"unsafe"
)

// BytesToString converts a byte slice to a string without copying.
// The caller must not modify b afterwards.
func BytesToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// StringToBytes converts a string to a byte slice without copying.
// The returned slice must not be modified.
func StringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

// CopyString returns a copy of s backed by freshly allocated memory.
func CopyString(s string) string {
	return string([]byte(s))
}

// ToSnakeCase converts the given string to snake_case.
//
// Parameters:
//
//	str: the string to convert; only ASCII characters are considered.
func ToSnakeCase(str string) string {
	if str == "" {
		return ""
	}

	var (
		sb      strings.Builder
		isLower = func(c byte) bool {
			return c >= 'a' && c <= 'z'
		}
		isUpper = func(c byte) bool {
			return c >= 'A' && c <= 'Z'
		}
	)

	for i := 0; i < len(str); i++ {
		c := str[i]

		var prev byte
		if i > 0 {
			prev = str[i-1]
		}

		var next byte
		if i < len(str)-1 {
			next = str[i+1]
		}

		if isUpper(c) && (isLower(prev) || isLower(next)) {
			sb.WriteRune('_')
			sb.WriteByte(c + ('a' - 'A'))
		} else if isUpper(c) {
			sb.WriteByte(c + ('a' - 'A'))
		} else {
			sb.WriteRune(rune(c))
		}
	}

	// trim leading/trailing underscores
	return strings.Trim(sb.String(), "_")
}

// PrettyJSON renders v as indented JSON, ignoring marshal errors.
func PrettyJSON(v any) string {
	b, _ := json.MarshalIndent(v, "", "  ")
	return string(b)
}
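The zero-copy conversions rely on Go 1.20's `unsafe.String`/`unsafe.SliceData` and share memory with the input, which is exactly why mutation afterwards is forbidden. A small sketch showing the aliasing (conversions inlined so it runs standalone):

```go
package main

import (
	"fmt"
	"unsafe"
)

// bytesToString is an inline copy of BytesToString: no allocation, the
// string aliases b's backing array (Go 1.20+).
func bytesToString(b []byte) string {
	return unsafe.String(unsafe.SliceData(b), len(b))
}

// stringToBytes is an inline copy of StringToBytes: a read-only view;
// writing to the result is undefined behavior.
func stringToBytes(s string) []byte {
	return unsafe.Slice(unsafe.StringData(s), len(s))
}

func main() {
	b := []byte("hello")
	s := bytesToString(b)

	// Mutating b is visible through s, demonstrating the shared memory
	// (and why the real helpers forbid mutation after conversion).
	b[0] = 'H'
	fmt.Println(s) // Hello

	v := stringToBytes("world")
	fmt.Println(len(v)) // 5
}
```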
125
pkg/tool/table.go
Normal file
@@ -0,0 +1,125 @@
package tool

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"os"
	"reflect"
	"strings"

	"github.com/jedib0t/go-pretty/v6/table"
)

// TablePrinter renders data as a table, writing to os.Stdout unless a
// writer is supplied.
func TablePrinter(data any, writers ...io.Writer) {
	var w io.Writer = os.Stdout
	if len(writers) > 0 && writers[0] != nil {
		w = writers[0]
	}

	t := table.NewWriter()
	structPrinter(t, "", data)
	_, _ = fmt.Fprintln(w, t.Render())
}

func structPrinter(w table.Writer, prefix string, item any) {
Start:
	rv := reflect.ValueOf(item)
	if !rv.IsValid() || rv.IsZero() { // IsZero panics on an invalid Value, so guard first
		return
	}

	for rv.Type().Kind() == reflect.Pointer {
		rv = rv.Elem()
	}

	switch rv.Type().Kind() {
	case reflect.Invalid,
		reflect.Uintptr,
		reflect.Chan,
		reflect.Func,
		reflect.UnsafePointer:
	case reflect.Bool,
		reflect.Int,
		reflect.Int8,
		reflect.Int16,
		reflect.Int32,
		reflect.Int64,
		reflect.Uint,
		reflect.Uint8,
		reflect.Uint16,
		reflect.Uint32,
		reflect.Uint64,
		reflect.Float32,
		reflect.Float64,
		reflect.Complex64,
		reflect.Complex128,
		reflect.Interface:
		w.AppendRow(table.Row{strings.TrimPrefix(prefix, "."), rv.Interface()})
	case reflect.String:
		val := rv.String()
		if len(val) <= 160 {
			w.AppendRow(table.Row{strings.TrimPrefix(prefix, "."), val})
			return
		}

		// truncate long strings, keeping the head and tail
		w.AppendRow(table.Row{strings.TrimPrefix(prefix, "."), val[0:64] + "..." + val[len(val)-64:]})
	case reflect.Array, reflect.Slice:
		for i := 0; i < rv.Len(); i++ {
			p := strings.Join([]string{prefix, fmt.Sprintf("[%d]", i)}, ".")
			structPrinter(w, p, rv.Index(i).Interface())
		}
	case reflect.Map:
		for _, k := range rv.MapKeys() {
			structPrinter(w, fmt.Sprintf("%s.{%v}", prefix, k), rv.MapIndex(k).Interface())
		}
	case reflect.Pointer:
		goto Start
	case reflect.Struct:
		for i := 0; i < rv.NumField(); i++ {
			p := fmt.Sprintf("%s.%s", prefix, rv.Type().Field(i).Name)
			field := rv.Field(i)

			if !field.CanInterface() { // skip unexported fields instead of aborting
				continue
			}

			structPrinter(w, p, field.Interface())
		}
	}
}

// TableMapPrinter renders a JSON document as a flattened key/value table.
func TableMapPrinter(data []byte) {
	m := make(map[string]any)
	if err := json.Unmarshal(data, &m); err != nil {
		log.Printf("[E] unmarshal json err: %s", err.Error())
		return
	}

	t := table.NewWriter()
	addRow(t, "", m)
	fmt.Println(t.Render())
}

func addRow(w table.Writer, prefix string, m any) {
	rv := reflect.ValueOf(m)
	switch rv.Type().Kind() {
	case reflect.Map:
		for _, k := range rv.MapKeys() {
			key := k.String()
			if prefix != "" {
				key = strings.Join([]string{prefix, k.String()}, ".")
			}
			addRow(w, key, rv.MapIndex(k).Interface())
		}
	case reflect.Slice, reflect.Array:
		for i := 0; i < rv.Len(); i++ {
			addRow(w, fmt.Sprintf("%s[%d]", prefix, i), rv.Index(i).Interface())
		}
	default:
		w.AppendRow(table.Row{prefix, m})
	}
}
128
pkg/tool/tls.go
Normal file
@@ -0,0 +1,128 @@
package tool

import (
	"bytes"
	"crypto/rand"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// GenerateTlsConfig creates a self-signed CA plus a server certificate and
// returns matching server/client TLS configs (intended for tests and local use).
func GenerateTlsConfig(serverName ...string) (serverTLSConf *tls.Config, clientTLSConf *tls.Config, err error) {
	ca := &x509.Certificate{
		SerialNumber: big.NewInt(2019),
		Subject: pkix.Name{
			Organization:  []string{"Company, INC."},
			Country:       []string{"US"},
			Province:      []string{""},
			Locality:      []string{"San Francisco"},
			StreetAddress: []string{"Golden Gate Bridge"},
			PostalCode:    []string{"94016"},
		},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(99, 0, 0),
		IsCA:                  true,
		ExtKeyUsage:           []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
		KeyUsage:              x509.KeyUsageDigitalSignature | x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// create our private and public key
	caPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
	if err != nil {
		return nil, nil, err
	}
	// create the CA
	caBytes, err := x509.CreateCertificate(rand.Reader, ca, ca, &caPrivKey.PublicKey, caPrivKey)
	if err != nil {
		return nil, nil, err
	}
	// pem encode
	caPEM := new(bytes.Buffer)
	pem.Encode(caPEM, &pem.Block{
		Type:  "CERTIFICATE",
		Bytes: caBytes,
	})
	caPrivKeyPEM := new(bytes.Buffer)
	pem.Encode(caPrivKeyPEM, &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(caPrivKey),
	})

	_serverName := ""
	if len(serverName) > 0 && serverName[0] != "" {
		_serverName = serverName[0]
	}
	// set up our server certificate
	cert := &x509.Certificate{
		SerialNumber: big.NewInt(2019),
		Subject: pkix.Name{
			Organization:  []string{"Company, INC."},
			Country:       []string{"US"},
			Province:      []string{""},
			Locality:      []string{"San Francisco"},
			StreetAddress: []string{"Golden Gate Bridge"},
			PostalCode:    []string{"94016"},
			CommonName:    _serverName,
		},
		IPAddresses:  []net.IP{net.IPv4(127, 0, 0, 1), net.IPv6loopback},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		SubjectKeyId: []byte{1, 2, 3, 4, 6},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth, x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature,
	}

	// add DNS names to SAN if serverName is provided
	if _serverName != "" {
		cert.DNSNames = []string{_serverName}
	}
	certPrivKey, err := rsa.GenerateKey(rand.Reader, 4096)
	if err != nil {
		return nil, nil, err
	}
	certBytes, err := x509.CreateCertificate(rand.Reader, cert, ca, &certPrivKey.PublicKey, caPrivKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := new(bytes.Buffer)
	pem.Encode(certPEM, &pem.Block{
		Type:  "CERTIFICATE",
		Bytes: certBytes,
	})
	certPrivKeyPEM := new(bytes.Buffer)
	pem.Encode(certPrivKeyPEM, &pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(certPrivKey),
	})
	serverCert, err := tls.X509KeyPair(certPEM.Bytes(), certPrivKeyPEM.Bytes())
	if err != nil {
		return nil, nil, err
	}
	serverTLSConf = &tls.Config{
		Certificates: []tls.Certificate{serverCert},
	}
	certpool := x509.NewCertPool()
	certpool.AppendCertsFromPEM(caPEM.Bytes())
	clientTLSConf = &tls.Config{
		RootCAs: certpool,
	}
	return
}

// LoadTLSConfigFromFile builds a server TLS config from a PEM certificate
// and key file on disk.
func LoadTLSConfigFromFile(certFile, keyFile string) (*tls.Config, error) {
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}

	tlsConf := &tls.Config{
		Certificates: []tls.Certificate{cert},
	}

	return tlsConf, nil
}
73
pkg/tool/tools.go
Normal file
@@ -0,0 +1,73 @@
package tool

import (
	"fmt"
	"math"
)

func Min[T ~int | ~uint | ~int8 | ~uint8 | ~int16 | ~uint16 | ~int32 | ~uint32 | ~int64 | ~uint64 | ~float32 | ~float64](a, b T) T {
	if a <= b {
		return a
	}

	return b
}

func Mins[T ~int | ~uint | ~int8 | ~uint8 | ~int16 | ~uint16 | ~int32 | ~uint32 | ~int64 | ~uint64 | ~float32 | ~float64](vals ...T) T {
	var val T

	if len(vals) == 0 {
		return val
	}

	val = vals[0]

	for _, item := range vals[1:] {
		if item < val {
			val = item
		}
	}

	return val
}

func Max[T ~int | ~uint | ~int8 | ~uint8 | ~int16 | ~uint16 | ~int32 | ~uint32 | ~int64 | ~uint64 | ~float32 | ~float64](a, b T) T {
	if a >= b {
		return a
	}

	return b
}

func Maxs[T ~int | ~uint | ~int8 | ~uint8 | ~int16 | ~uint16 | ~int32 | ~uint32 | ~int64 | ~uint64 | ~float32 | ~float64](vals ...T) T {
	var val T

	if len(vals) == 0 {
		return val
	}

	// start from the first element; starting from the zero value would be
	// wrong for all-negative inputs
	val = vals[0]

	for _, item := range vals[1:] {
		if item > val {
			val = item
		}
	}

	return val
}

func Sum[T ~int | ~uint | ~int8 | ~uint8 | ~int16 | ~uint16 | ~int32 | ~uint32 | ~int64 | ~uint64 | ~float32 | ~float64](vals ...T) T {
	var sum T = 0
	for i := range vals {
		sum += vals[i]
	}
	return sum
}

// Percent maps val from [minVal, maxVal] onto [minPercent, maxPercent]
// (both expressed as fractions of 1) and renders the result as "NN%".
func Percent(val, minVal, maxVal, minPercent, maxPercent float64) string {
	return fmt.Sprintf(
		"%d%%",
		int(math.Round(
			((val-minVal)/(maxVal-minVal)*(maxPercent-minPercent)+minPercent)*100,
		)),
	)
}
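`Percent` is a linear interpolation from the value range onto the percent range. A standalone sketch (the function inlined so the snippet runs by itself):

```go
package main

import (
	"fmt"
	"math"
)

// percent is an inline copy of Percent above: linearly map val from
// [minVal, maxVal] onto [minPercent, maxPercent] and format as "NN%".
func percent(val, minVal, maxVal, minPercent, maxPercent float64) string {
	return fmt.Sprintf(
		"%d%%",
		int(math.Round(
			((val-minVal)/(maxVal-minVal)*(maxPercent-minPercent)+minPercent)*100,
		)),
	)
}

func main() {
	fmt.Println(percent(0.5, 0, 1, 0, 1))         // 50%
	fmt.Println(percent(700, 700, 766, 0.1, 0.7)) // 10% (bottom of the range maps to minPercent)
	fmt.Println(percent(766, 700, 766, 0.1, 0.7)) // 70% (top of the range maps to maxPercent)
}
```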
81
pkg/tool/tools_test.go
Normal file
@@ -0,0 +1,81 @@
package tool

import (
	"testing"
	"unsafe"
)

func TestPercent(t *testing.T) {
	type args struct {
		val        float64
		minVal     float64
		maxVal     float64
		minPercent float64
		maxPercent float64
	}
	tests := []struct {
		name string
		args args
		want string
	}{
		{
			name: "case 1",
			args: args{
				val:        0.5,
				minVal:     0,
				maxVal:     1,
				minPercent: 0,
				maxPercent: 1,
			},
			want: "50%",
		},
		{
			name: "case 2",
			args: args{
				val:        0.3,
				minVal:     0.1,
				maxVal:     0.6,
				minPercent: 0,
				maxPercent: 1,
			},
			want: "40%",
		},
		{
			name: "case 3",
			args: args{
				val:        700,
				minVal:     700,
				maxVal:     766,
				minPercent: 0.1,
				maxPercent: 0.7,
			},
			want: "10%",
		},
		{
			name: "case 4",
			args: args{
				val:        766,
				minVal:     700,
				maxVal:     766,
				minPercent: 0.1,
				maxPercent: 0.7,
			},
			want: "70%",
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			if got := Percent(tt.args.val, tt.args.minVal, tt.args.maxVal, tt.args.minPercent, tt.args.maxPercent); got != tt.want {
				t.Errorf("Percent() = %v, want %v", got, tt.want)
			}
		})
	}
}

func TestCopyString(t *testing.T) {
	s1 := "hello"
	s2 := CopyString(s1)

	// compare the backing arrays: a real copy must not share data
	// (comparing &s1 == &s2 is always false for distinct variables and proves nothing)
	if unsafe.StringData(s1) == unsafe.StringData(s2) {
		t.Errorf("CopyString fail")
	}

	t.Log(s1, s2)
}
1
pkg/tool/tree.go
Normal file
@@ -0,0 +1 @@
package tool
14
pkg/tool/uuid.go
Normal file
@@ -0,0 +1,14 @@
package tool

import (
	"github.com/gofrs/uuid"
)

// NewV4 returns a random (version 4) UUID string.
func NewV4() (string, error) {
	uid, err := uuid.NewV4()
	if err != nil {
		return "", err
	}

	return uid.String(), nil
}