Cislunar Space Beginner's Guide
Prompt Tuning (P-tuning)

Author: CislunarSpace

Site: https://cislunarspace.cn

Definition

Prompt Tuning is a family of Parameter-Efficient Fine-Tuning (PEFT) techniques. The core idea is to prepend a set of learnable continuous vectors (called "soft prompts") to the model input, while freezing the original pretrained model weights. Only the soft prompt parameters are trained, allowing the model to adapt to different downstream tasks without modifying its own parameters.

P-tuning, proposed by Liu et al., is an important variant of prompt tuning. P-tuning V2 (2021) is an improved version that achieves performance comparable to full fine-tuning across multiple model scales and tasks.
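The core soft-prompt idea can be sketched in a few lines: freeze the pretrained weights and train only a small set of continuous vectors prepended to the input embeddings. This is a minimal illustration with toy dimensions (assuming PyTorch), not the implementation of any particular model:

```python
# Minimal soft-prompt sketch: a frozen embedding table stands in for the frozen
# pretrained weights, while the soft prompt is the only trainable parameter.
import torch
import torch.nn as nn

vocab_size, hidden_dim, num_soft_tokens = 1000, 64, 8

# Frozen pretrained embedding (stands in for the frozen base model)
embedding = nn.Embedding(vocab_size, hidden_dim)
embedding.weight.requires_grad_(False)

# Learnable soft prompt vectors (the only parameters an optimizer would update)
soft_prompt = nn.Parameter(torch.randn(num_soft_tokens, hidden_dim) * 0.02)

def build_input(token_ids: torch.Tensor) -> torch.Tensor:
    """Prepend the soft prompt to the embedded input tokens."""
    h = embedding(token_ids)                                # (batch, n, hidden)
    s = soft_prompt.unsqueeze(0).expand(h.size(0), -1, -1)  # (batch, k, hidden)
    return torch.cat([s, h], dim=1)                         # (batch, k + n, hidden)

x = torch.randint(0, vocab_size, (2, 10))  # batch of 2 length-10 inputs
t_input = build_input(x)
print(t_input.shape)  # torch.Size([2, 18, 64])
```

Passing `t_input` through the (frozen) model and backpropagating the task loss updates only `soft_prompt`, which is the adaptation mechanism the definition above describes.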

P-tuning V2 Principle

The P-tuning V2 workflow is as follows:

  1. Input processing: convert the input text $X$ via tokenization and embedding into a vector sequence $\{h_1, h_2, \ldots, h_n\}$
  2. Add soft prompts: prepend 128 learnable soft prompt tokens $S_1, S_2, \ldots, S_{128}$ before the input vectors
  3. Layer-wise embeddings: construct trainable embedding parameters corresponding to the soft prompt tokens at each layer of the LLM
  4. Training: update only the soft prompt tokens and layer-wise embedding parameters; the original model weights $\Phi_0$ remain unchanged

The input template is:

$T_{\text{input}} = \{S_1, S_2, \ldots, S_{128}, h_1, h_2, \ldots, h_n\}$

The final model parameters combine original and new parameters:

$\Phi = \Phi_0 + \Delta\phi$

where $\Delta\phi$ consists of the newly trained parameters.
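The parameter split in steps 3–4 can be sketched as follows: every base-model weight is frozen ($\Phi_0$), and one trainable prompt table is created per layer ($\Delta\phi$). The tiny stand-in model below uses toy dimensions, and real P-tuning V2 injects these prompts into each layer's attention rather than a plain linear stack; in a real multi-billion-parameter model the frozen part dwarfs the prompts:

```python
# Layer-wise trainable prompts alongside a frozen stand-in base model.
import torch
import torch.nn as nn

num_layers, prompt_len, hidden_dim = 4, 128, 64

# Phi_0: frozen base (a toy stack of linear layers standing in for an LLM)
base = nn.ModuleList(nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers))
for p in base.parameters():
    p.requires_grad_(False)

# Delta_phi: one trainable prompt embedding table per layer
layer_prompts = nn.ParameterList(
    nn.Parameter(torch.zeros(prompt_len, hidden_dim)) for _ in range(num_layers)
)

frozen = sum(p.numel() for p in base.parameters())
trainable = sum(p.numel() for p in layer_prompts)
print(frozen, trainable)  # 16640 32768
```

An optimizer built from `layer_prompts` alone then realizes step 4: gradients flow through the frozen layers, but only the prompt tables change.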

Soft Prompts vs. Hard Prompts

The "soft prompts" in prompt tuning are fundamentally different from "hard prompts" (natural language text prompts):

| Feature | Hard Prompt | Soft Prompt |
| --- | --- | --- |
| Form | Natural language text | Learnable parameters in a continuous vector space |
| Optimization | Manual design or search | Automatic optimization via gradient descent |
| Expressiveness | Limited to discrete tokens in the vocabulary | Can represent continuous semantics not in the vocabulary |
| Use cases | General interaction, zero-shot inference | Efficient task-specific adaptation |

Comparison with Full Fine-Tuning and LoRA

| Feature | Full Fine-Tuning | P-tuning V2 | LoRA |
| --- | --- | --- | --- |
| Trainable parameters | 100% | <1% | 0.1%–3% |
| Modification location | All layers | Input layer + layer-wise embeddings | Target layer weight matrices |
| Inference overhead | None | Extra processing for soft prompt tokens | None (after merging) |
| Typical model | Any | ChatGLM2-6B | ChatGLM3-6B |
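The "<1%" figure for P-tuning V2 can be sanity-checked with back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (a hypothetical 6B-parameter base with 28 layers and hidden size 4096, counting only the layer-wise prompt embeddings, not any task head):

```python
# Rough trainable-parameter fraction for layer-wise prompts of length 128.
base_params = 6_000_000_000                # frozen base model (Phi_0)
num_layers, hidden_dim, prompt_len = 28, 4096, 128

delta_phi = num_layers * prompt_len * hidden_dim  # layer-wise prompt embeddings
fraction = delta_phi / base_params
print(delta_phi, f"{fraction:.4%}")        # 14680064 0.2447%
```

Roughly 15M trainable parameters against 6B frozen ones, i.e. about a quarter of a percent, is consistent with the "<1%" entry in the table.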

Application in Spacecraft Intention Recognition

In the study by Jing et al. (2025), P-tuning V2 was used to fine-tune the ChatGLM2-6B model. Training used 128 soft prompt tokens, learning rate 0.02, max input length 256 tokens, and max output length 128 tokens. Results showed:

  • The P-tuning V2-fine-tuned ChatGLM2-6B achieved 99.81% accuracy under CoT prompts
  • Accuracy improved significantly compared to the base model
  • The CoT-prompt-fine-tuned model showed the best robustness in perturbation tests, with a standard deviation close to that of the base model

Related Concepts

  • Low-Rank Adaptation (LoRA)
  • Chain-of-Thought (CoT) Prompting
  • Spacecraft Intention Recognition

References

  • Liu X, Ji K, Fu Y, et al. P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks. arXiv:2110.07602, 2021.
  • Liu P, Yuan W, Fu J, et al. Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput Surv. 2023;55(9):1-35.
  • Jing H, Sun Q, Dang Z, Wang H. Intention Recognition of Space Noncooperative Targets Using Large Language Models. Space Sci. Technol. 2025;5:0271.
Last Updated: 4/27/26, 10:22 AM
Contributors: Hermes Agent
© 2026 Cislunar Space Beginner's Guide  |  湘ICP备2026006405号-1