Cislunar Space Beginner's Guide

Prompt Tuning (P-tuning)

Author: CislunarSpace

Site: https://cislunarspace.cn

Definition

Prompt Tuning is a family of Parameter-Efficient Fine-Tuning (PEFT) techniques. The core idea is to prepend a set of learnable continuous vectors (called "soft prompts") to the model input, while freezing the original pretrained model weights. Only the soft prompt parameters are trained, allowing the model to adapt to different downstream tasks without modifying its own parameters.

P-tuning is a notable variant of prompt tuning proposed by Liu et al. P-tuning V2 (2021) is an improved version that achieves performance comparable to full fine-tuning across model scales and tasks.

P-tuning V2 Principle

The P-tuning V2 workflow is as follows:

  1. Input processing: convert the input text $X$ through tokenization and embedding into a vector sequence $\{h_1, h_2, \ldots, h_n\}$
  2. Add soft prompts: prepend 128 learnable soft prompt tokens $S_1, S_2, \ldots, S_{128}$ before the input vectors
  3. Layer-wise embeddings: construct trainable embedding parameters corresponding to the soft prompt tokens at each layer of the LLM
  4. Training: update only the soft prompt tokens and the layer-wise embedding parameters; the original model weights $\Phi_0$ remain frozen

The input template is:

$$T_{\text{input}} = \{S_1, S_2, \ldots, S_{128}, h_1, h_2, \ldots, h_n\}$$

The final model parameters combine original and new parameters:

$$\Phi = \Phi_0 + \Delta\phi$$

where $\Delta\phi$ consists of the newly trained parameters.
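The layer-wise mechanism above can be sketched with a toy NumPy model. The dimensions, the single linear map per layer, and the re-prepending of prompts at every layer are illustrative stand-ins for a transformer, not the actual ChatGLM architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (a real model would have hidden_dim ~4096, num_layers ~28,
# num_soft = 128 as described above).
hidden_dim, num_layers, num_soft, seq_len = 8, 2, 4, 5

# Frozen pretrained weights Phi_0: one linear map per layer stands in
# for a transformer block. These are never updated.
frozen_layers = [rng.normal(size=(hidden_dim, hidden_dim))
                 for _ in range(num_layers)]

# Trainable parameters Delta-phi: P-tuning V2 keeps a separate learnable
# soft-prompt block per layer, not only at the input layer.
soft_prompts = [rng.normal(scale=0.02, size=(num_soft, hidden_dim))
                for _ in range(num_layers)]

def forward(h):
    """h: (seq_len, hidden_dim) input embeddings {h_1, ..., h_n}."""
    for W, S in zip(frozen_layers, soft_prompts):
        x = np.concatenate([S, h], axis=0)  # T = {S_1..S_k, h_1..h_n}
        x = np.tanh(x @ W)                  # frozen transformation
        h = x[num_soft:]                    # keep only the text positions
    return h

h0 = rng.normal(size=(seq_len, hidden_dim))
out = forward(h0)
print(out.shape)  # (5, 8): sequence length unchanged by the prompts
```

During training, gradients would flow only into `soft_prompts`; the frozen layers contribute to the forward pass but receive no updates.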

Soft Prompts vs. Hard Prompts

The "soft prompts" in prompt tuning are fundamentally different from "hard prompts" (natural language text prompts):

| Feature | Hard Prompt | Soft Prompt |
| --- | --- | --- |
| Form | Natural language text | Learnable parameters in a continuous vector space |
| Optimization | Manual design or search | Automatic optimization via gradient descent |
| Expressiveness | Limited to discrete tokens in the vocabulary | Can represent continuous semantics not in the vocabulary |
| Use cases | General interaction, zero-shot inference | Efficient task-specific adaptation |
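The "Form" distinction can be made concrete: a hard prompt is a sequence of token ids that selects rows of the frozen embedding table, while a soft prompt is a set of free vectors that need not coincide with any vocabulary embedding. A small NumPy sketch with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab_size, hidden_dim = 100, 16

# Frozen embedding table of the pretrained model (illustrative values).
embedding_table = rng.normal(size=(vocab_size, hidden_dim))

# Hard prompt: discrete token ids -> rows of the embedding table.
hard_ids = np.array([3, 41, 7])
hard_vectors = embedding_table[hard_ids]

# Soft prompt: free continuous vectors, optimized directly by gradient
# descent; they are not constrained to the vocabulary.
soft_vectors = rng.normal(scale=0.02, size=(3, hidden_dim))

# Check whether any soft vector coincides with a vocabulary embedding.
matches = (np.isclose(embedding_table[None, :, :], soft_vectors[:, None, :])
           .all(axis=-1).any())
print(matches)  # False: soft prompts live between/outside vocabulary points
```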

Comparison with Full Fine-Tuning and LoRA

| Feature | Full Fine-Tuning | P-tuning V2 | LoRA |
| --- | --- | --- | --- |
| Trainable parameters | 100% | <1% | 0.1%–3% |
| Modification location | All layers | Input layer + layer-wise embeddings | Target layer weight matrices |
| Inference overhead | None | Extra processing for soft prompt tokens | None (after merging) |
| Typical model | Any | ChatGLM2-6B | ChatGLM3-6B |
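The "<1%" figure can be sanity-checked with rough arithmetic. The layer count and hidden size below are assumed values for a ChatGLM2-6B-class model, used only for illustration:

```python
# Rough trainable-parameter count for P-tuning V2 (assumed dimensions for
# a ~6B-parameter model; check the model config before relying on these).
num_layers = 28
hidden_dim = 4096
num_soft_tokens = 128     # as in the setup described in this article
total_params = 6.2e9      # approximate full model size

# One learnable vector per soft token per layer (layer-wise embeddings).
ptuning_params = num_layers * num_soft_tokens * hidden_dim
fraction = ptuning_params / total_params
print(f"{ptuning_params / 1e6:.1f}M trainable params "
      f"({fraction:.2%} of the full model)")
```

Under these assumptions the result is on the order of 15M trainable parameters, roughly 0.2% of the model, consistent with the "<1%" entry in the table.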

Application in Spacecraft Intention Recognition

In the study by Jing et al. (2025), P-tuning V2 was used to fine-tune the ChatGLM2-6B model. Training used 128 soft prompt tokens, learning rate 0.02, max input length 256 tokens, and max output length 128 tokens. Results showed:

  • The P-tuning V2-fine-tuned ChatGLM2-6B achieved 99.81% accuracy under CoT prompts
  • Accuracy improved significantly over the base model
  • The CoT-prompt-fine-tuned model showed the best robustness in perturbation tests, with a standard deviation close to that of the base model

Related Concepts

  • Low-Rank Adaptation (LoRA)
  • Chain-of-Thought (CoT) Prompting
  • Spacecraft Intention Recognition

References

  • Liu X, Ji K, Fu Y, et al. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv:2110.07602, 2021.
  • Liu P, Yuan W, Fu J, et al. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput Surv. 2023;55(9):1-35.
  • Jing H, Sun Q, Dang Z, Wang H. Intention Recognition of Space Noncooperative Targets Using Large Language Models. Space Sci. Technol. 2025;5:0271.
Last Updated: 4/29/26, 11:30 AM
Contributors: Hermes Agent, Cron Job
© 2026 Cislunar Space Beginner's Guide  |  湘ICP备2026006405号-1