Cislunar Space Beginner's Guide
Low-Rank Adaptation (LoRA)

Author: CislunarSpace

Site: https://cislunarspace.cn

Definition

Low-Rank Adaptation (LoRA) is a Parameter-Efficient Fine-Tuning (PEFT) method proposed by Hu et al. (2021). The core idea of LoRA is that the weight updates in a pretrained model can be effectively approximated by a low-rank matrix. By freezing the original pretrained weights and injecting a pair of trainable low-rank decomposition matrices into each Transformer layer, LoRA achieves performance comparable to full fine-tuning while training only 0.1%–3% of the original model parameters.

Mathematical Principle

Given a pretrained weight matrix $\Phi_0 \in \mathbb{R}^{d \times k}$ at some layer, LoRA decomposes the parameter update $\Delta\phi$ into a product of two low-rank matrices:

$$\Delta\phi = AB$$

where $A \in \mathbb{R}^{d \times r}$, $B \in \mathbb{R}^{r \times k}$, and rank $r \ll \min(d, k)$.

The forward pass becomes:

$$Y = X(\Phi_0 + \Delta\phi) = X\Phi_0 + XAB$$

Since $r$ is much smaller than $d$ and $k$, the number of trainable parameters is dramatically reduced. For example, with $d = k = 4096$ and $r = 8$, the original layer has ~16.8M parameters, while LoRA requires training only ~65K parameters (~0.4%).
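The parameter-count arithmetic above can be checked directly; this is a minimal sketch in plain Python (no framework assumed):

```python
# Parameter-count comparison for the d = k = 4096, r = 8 example above.
d, k, r = 4096, 4096, 8

full_params = d * k              # dense weight matrix Phi_0
lora_params = d * r + r * k      # low-rank factors A (d x r) and B (r x k)

print(f"full layer: {full_params:,} params")           # 16,777,216 (~16.8M)
print(f"LoRA:       {lora_params:,} params")           # 65,536 (~65K)
print(f"ratio:      {lora_params / full_params:.2%}")  # 0.39%
```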

Training Process

LoRA training follows these steps:

  1. Freeze pretrained weights: all original parameters $\Phi_0$ remain unchanged
  2. Inject low-rank matrices: add trainable $A$ and $B$ matrices to each target layer (typically the Q, K, V, and O projection matrices in attention layers)
  3. Initialization: $A$ is typically initialized with Gaussian random values and $B$ is initialized to zero, ensuring $\Delta\phi = AB = 0$ at the start of training
  4. Training: only the $A$ and $B$ parameters are updated, using standard gradient descent
  5. Inference merging: after training, merge $AB$ into the original weights, $\Phi = \Phi_0 + AB$, introducing no additional inference latency
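The five steps above can be sketched with NumPy; this is a toy illustration of the mechanics (not a reference implementation), using the matrix names from this article:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 32, 4  # small toy dimensions

# Step 1: frozen pretrained weights (never updated)
Phi0 = rng.standard_normal((d, k))

# Steps 2-3: trainable low-rank factors; A Gaussian, B zero,
# so Delta_phi = A @ B = 0 at the start of training.
A = rng.standard_normal((d, r)) * 0.01
B = np.zeros((r, k))

def forward(X):
    # Y = X Phi0 + X A B  (the LoRA forward pass)
    return X @ Phi0 + (X @ A) @ B

X = rng.standard_normal((8, d))
assert np.allclose(forward(X), X @ Phi0)  # output unchanged at init

# Step 4 (stand-in): pretend gradient descent has changed B.
B += 0.1 * rng.standard_normal((r, k))

# Step 5: merge AB into the weights -> no extra inference cost.
Phi_merged = Phi0 + A @ B
assert np.allclose(forward(X), X @ Phi_merged)
```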

Comparison with Full Fine-Tuning

| Feature | Full Fine-Tuning | LoRA |
| --- | --- | --- |
| Trainable parameters | 100% | 0.1%–3% |
| Memory requirements | High | Low |
| Training speed | Slow | Fast |
| Inference latency | No additional delay | No additional delay (after merging) |
| Multi-task support | Requires multiple full model copies | Different low-rank matrices per task |
| Performance | Optimal | Near full fine-tuning |
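The multi-task row can be made concrete: instead of a full model copy per task, only a small $(A, B)$ pair is stored per task and merged into the shared base weights on demand. An illustrative NumPy sketch (the task names here are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, r = 64, 32, 4
Phi0 = rng.standard_normal((d, k))  # one shared frozen base weight matrix

# One small (A, B) adapter pair per task instead of a full model copy.
adapters = {
    "intent_recognition": (rng.standard_normal((d, r)), rng.standard_normal((r, k))),
    "summarization":      (rng.standard_normal((d, r)), rng.standard_normal((r, k))),
}

def weights_for(task):
    A, B = adapters[task]
    return Phi0 + A @ B  # merge the task's low-rank update

X = rng.standard_normal((2, d))
y1 = X @ weights_for("intent_recognition")
y2 = X @ weights_for("summarization")
assert y1.shape == (2, k) and y2.shape == (2, k)
assert not np.allclose(y1, y2)  # different adapters give different behavior
```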

Comparison with P-tuning V2

Both LoRA and P-tuning V2 are parameter-efficient fine-tuning methods, but they differ in strategy:

| Feature | LoRA | P-tuning V2 |
| --- | --- | --- |
| Parameter modification | Constructs low-rank matrices externally | Adds soft prompts and embedding layers internally |
| Modification location | Weight matrices at each target layer | Virtual prompts before input + embeddings at each layer |
| Inference | No overhead after weight merging | Requires processing additional soft prompt tokens |
| Typical application | ChatGLM3-6B fine-tuning | ChatGLM2-6B fine-tuning |

Application in Spacecraft Intention Recognition

In the study by Jing et al. (2025), LoRA was used to fine-tune the ChatGLM3-6B model for spacecraft intention recognition. The experiment used LoRA rank $r = 8$ and a scaling factor of 32, training for only ~3,000 iterations. Results showed:

  • The LoRA-fine-tuned ChatGLM3-6B achieved 99.90% accuracy under instruction prompts, the highest among all tested models
  • Accuracy improved by 83.94% compared to the base model
  • Robustness remained close to that of the base model, with the standard deviation increasing by only 1.25×

Related Concepts

  • Prompt Tuning (P-tuning)
  • Chain-of-Thought (CoT) Prompting
  • Spacecraft Intention Recognition

References

  • Hu E J, Shen Y, Wallis P, et al. LoRA: Low-rank adaptation of large language models. arXiv:2106.09685, 2021.
  • Jing H, Sun Q, Dang Z, Wang H. Intention Recognition of Space Noncooperative Targets Using Large Language Models. Space Sci. Technol. 2025;5:0271.
  • Ling C, Zhao X, Lu J, et al. Domain specialization as the key to make large language models disruptive: A comprehensive survey. arXiv:2305.18703, 2023.
Last Updated: 4/29/26, 11:30 AM
Contributors: Hermes Agent, Cron Job