How to disclose AI use in remote sensing and environmental earth observation papers
A practical guide for researchers who use AI in remote sensing, geospatial analysis, and environmental earth observation papers and need clear disclosure text for journal submission.
AI disclosure for remote sensing papers
If you work in remote sensing, you probably use AI in more than one place.
You might use a large language model to clean prose. You might use a vision model to label land cover. You might use machine learning for downscaling, cloud masking, gap filling, or biomass prediction. You might also use an AI search assistant while writing the related work section.
That mix creates a disclosure problem.
Many researchers know that journals now ask about generative AI in writing. Fewer know how to describe AI used in the research pipeline itself. In remote sensing, that second part matters more. Your model choices, training data, prompts, post-processing steps, and human checks can change the result.
This guide shows you how to disclose AI use in a remote sensing or environmental earth observation paper without turning your methods section into a compliance maze. If you want a general primer first, read [What Are AI Usage Cards?](/what-are-ai-usage-cards/) and Why AI Transparency Matters in Research. If you need venue-level context, see [AI Disclosure Policies by Major Journals](/ai-disclosure-policies-by-journal/).
What journals usually care about
For Elsevier journals, authors may use generative AI tools in the writing process with human oversight and disclosure. Elsevier says authors must review and edit the output, take responsibility for the manuscript, and insert a declaration for AI used in writing. Elsevier also says routine grammar, spelling, and punctuation checks do not need declaration. For AI used in the research process, Elsevier says authors should describe that use in the methods section when relevant. AI tools cannot be listed as authors. (elsevier.com)
That split matters for remote sensing papers.
A journal in this area, such as Remote Sensing of Environment, sits in a field where machine learning and deep learning are already standard research methods. Elsevier describes the journal’s scope as including machine and deep learning for remote sensing data analysis, along with topics such as data fusion, time series analysis, calibration, hydrology, oceanography, cryosphere work, and atmospheric science. (shop.elsevier.com)
So the real question is not "Did you use AI?" The real question is "Where did AI enter the work, and what did it do?"
Separate writing help from research methods
Start with a simple rule.
Treat writing support and research support as two different disclosures.
If you used ChatGPT, Claude, Gemini, Copilot, or another assistant to rewrite sentences, summarize your own notes, or improve English, place that in the manuscript’s AI declaration or acknowledgments area if the journal asks for one. Elsevier gives sample wording for this kind of statement and places it near the end of the manuscript, above the references. (elsevier.com)
If you used AI or machine learning inside the study itself, describe that in Methods. In remote sensing, this often matters more than the writing disclosure because it affects reproducibility and interpretation. Elsevier’s policy says AI use in the research process should be declared and described in detail in the methods section when relevant. (elsevier.com)
That means a remote sensing paper may need both:
- a short writing declaration
- a detailed methods description
If you want a reusable format for that split, generate an AI Usage Card at [ai-cards.org](https://ai-cards.org/). You can attach the card as supplementary material, turn it into an appendix, or copy its text into your methods and acknowledgments sections.
What to disclose in a remote sensing paper
Remote sensing studies often combine many moving parts. A vague line like "AI was used to support analysis" tells editors and reviewers almost nothing.
Name the role AI played.
For example, say whether you used AI for land cover classification, object detection, image segmentation, super resolution, georegistration support, quality control, synthetic data generation, literature search, code assistance, or language editing.
Then state the tool or model.
That may mean a named model family, a software package, a hosted service, or your own custom architecture. If you fine tuned a model, say so. If you used a closed tool for writing help, name the tool and the task. If you used an open model for analysis, give the version, framework, and key settings.
Then describe the human checks.
In remote sensing, this part is where trust lives. Did you manually inspect labels? Compare AI outputs against ground truth? Reject hallucinated references from a writing assistant? Review code line by line? Check confusion matrices by biome or sensor type? Explain that.
You should also disclose data risks that AI can hide:
- training on weak labels
- label leakage across time or geography
- synthetic augmentation
- prompt driven classification steps
- undocumented preprocessing
- hidden vendor updates in hosted tools
These details fit well with the logic behind AI Usage Cards Examples and Templates and AI Transparency Requirements for Journal Submissions.
A simple disclosure framework for earth observation work
I like a five-part structure for remote sensing papers because it matches how these studies actually get built.
1. Task
State the exact task.
"AI assisted analysis" is too broad. "A U-Net model segmented surface water extent from Sentinel-2 imagery" is usable.
2. Tool
Name the system.
Include model name, version, API or package, and date accessed if it was a hosted service. Hosted AI products change. Dates help.
3. Data
State what went in.
Name sensors, spatial resolution, time range, ground truth source, annotation workflow, and any synthetic or augmented data.
4. Oversight
State what you checked.
Say who reviewed the output, what metrics you used, and what error analysis you ran.
5. Boundaries
State what the AI did not do.
This sounds small, but it stops readers from guessing. If a language model only helped polish the cover letter, say that. If it did not write the discussion or generate figures, say that too.
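Put together, the five parts above can be drafted as a single LaTeX block. This is a sketch only: the task, tool names, counts, and dates in brackets are placeholders, not wording from any journal, and must be replaced with your own specifics.

```latex
\section*{AI use disclosure}
% 1. Task: state the exact task
A U-Net model segmented surface water extent from Sentinel-2 imagery.
% 2. Tool: name the system, version, and access date
The model was implemented in [framework, version]; the hosted writing
assistant [tool] was accessed on [date].
% 3. Data: state what went in
Training used [n] manually annotated scenes from [time range]; no
synthetic or augmented data was used.
% 4. Oversight: state what you checked
Two authors reviewed all predicted masks against reference polygons and
inspected per-scene confusion matrices.
% 5. Boundaries: state what the AI did not do
No AI tool drafted the results, discussion, or figures.
```

Keeping the numbered comments in your draft makes it easy for coauthors to see which of the five parts is still missing; delete them before submission.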
Example wording for common remote sensing scenarios
You can adapt the examples below to your paper. Then polish them with the generator on ai-cards.org.
Scenario 1: generative AI for writing support only
Use this when AI helped with wording but not with analysis.
\section*{Declaration of generative AI use}
During the preparation of this manuscript, the authors used ChatGPT to improve grammar and sentence clarity in selected sections of the text. The authors reviewed and edited all outputs manually and take full responsibility for the content of the manuscript. No AI tool was used to generate, analyze, or interpret the study data.
This tracks Elsevier's author guidance on disclosing generative AI used in writing while keeping responsibility with the authors. (elsevier.com)
Scenario 2: machine learning as the research method
Use this when the study itself relies on AI or machine learning.
\subsection*{AI methods disclosure}
We trained a convolutional neural network to classify post-fire burn severity from Sentinel-2 surface reflectance imagery and field reference plots. The model architecture, training split, hyperparameters, and evaluation metrics are reported in the Methods section. Two authors manually reviewed misclassified samples and repeated the error analysis by ecoregion. No generative AI tool was used to draft the results or discussion.
This format helps reviewers see that the AI method belongs in Methods, not in a one-line note at the end.
Scenario 3: both writing support and research support
This is common now.
\section*{AI use disclosure}
The authors used a large language model to improve grammar in the Introduction and Conclusion after the scientific content had been drafted by the authors. The authors reviewed and edited all suggested text manually.
The study also used machine learning methods for image segmentation of mangrove extent from multi-temporal Sentinel-1 and Sentinel-2 data. We report the model architecture, training data, preprocessing steps, validation design, and error analysis in the Methods section. The authors take full responsibility for all interpretations, figures, and conclusions.
Scenario 4: AI-assisted coding
A lot of labs now use coding assistants. That use deserves a short note when it shaped scripts used in analysis.
\subsection*{Code assistance disclosure}
The authors used GitHub Copilot to suggest portions of Python code for data preprocessing and visualization. All suggested code was inspected, modified where needed, and validated against expected outputs before use in the analysis pipeline.
If you need tool-specific wording, see How to disclose Microsoft Copilot use in academic writing and How to Disclose ChatGPT Usage in Academic Papers.
Where remote sensing authors often get disclosure wrong
The most common mistake is to disclose only the chatbot.
That misses the real issue. In remote sensing, reviewers usually care more about the analytic model than about sentence polishing. If your paper uses deep learning for retrieval, classification, segmentation, or prediction, the method needs enough detail for another researcher to understand the setup and limits.
The second mistake is to treat a hosted model like stable software.
It may not be stable. If you used a vendor model through an API or web interface, record the service name, version if available, and access date. That gives readers a time stamp.
The third mistake is to hide human judgment.
You do not build trust by pretending the pipeline ran on its own. Say who labeled data, who screened outputs, who checked edge cases like cloud shadows, snow, mixed pixels, or sensor drift. A plain sentence does more than a polished disclaimer.
The fourth mistake is to forget figures and images.
Elsevier's writing policy for books and commissioned content does not permit authors to use AI tools to create or alter images for publication, and its broader publishing ethics materials point authors to specific rules on AI in figures and artwork. If your workflow touched images beyond ordinary scientific processing, check the current journal rules before submission. (elsevier.com)
A practical workflow before submission
Keep this simple.
First, list every place AI touched the project. Writing, coding, labeling, modeling, search, visualization.
Next, separate manuscript help from scientific method.
Then write one sentence for each use:
- what tool you used
- what you used it for
- what you checked by hand
After that, decide where each sentence belongs. Writing support usually goes in the journal’s AI declaration. Research use goes in Methods. Code assistance may sit in Methods, Data and Code Availability, or an appendix, depending on how central it was.
Then generate an AI Usage Card at ai-cards.org and keep it with the submission files. This works well when coauthors need to agree on wording. It also helps when an editor asks a follow-up question after submission.
If you write in LaTeX, the card text can drop straight into your manuscript. Our LaTeX Tutorial for AI Usage Cards and How to Use AI Usage Cards in Overleaf show the mechanics.
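As a minimal placement sketch, the writing declaration sits near the end of the manuscript, above the references, as Elsevier's guidance describes. The document class and bibliography style below are assumptions; check your journal's template before using them.

```latex
% ... end of manuscript body ...

\section*{Declaration of generative AI use}
During the preparation of this work, the authors used [tool] to [task].
The authors reviewed and edited the output and take full responsibility
for the content of the published article.

% Assumed Elsevier elsarticle setup; your journal may differ.
\bibliographystyle{elsarticle-harv}
\bibliography{references}
```

Research-process disclosure stays in Methods as its own subsection, so only the writing declaration lives in this end-of-manuscript slot.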
A model paragraph you can adapt for a remote sensing journal
If you want one block that covers most cases, start here and edit hard.
\section*{AI use statement}
This study used AI in two limited ways. First, the authors used a generative AI tool to improve grammar and wording in selected manuscript sections after drafting the scientific content. The authors reviewed and revised all suggested text manually. Second, the research workflow used machine learning models for remote sensing analysis. The Methods section reports the model architecture, training data, preprocessing, validation design, and error analysis in full. The authors take responsibility for the final text, analyses, figures, and conclusions.
Do not paste this in unchanged.
Editors can spot template text fast. Make it specific to your sensors, task, and checks.
The point is clarity, not confession
Researchers sometimes approach AI disclosure like a trap. They worry that one honest sentence will make the paper look weaker.
I think the opposite is true.
Clear disclosure tells editors that you know where your claims come from. In remote sensing, that matters because your workflow can get technical fast, and readers need a map. A good disclosure does not apologize for using AI. It shows control.
If you want a clean way to do that, generate your card at ai-cards.org. You can use it as a submission note, a supplement, an appendix, or plain text copied into your manuscript. That takes a messy internal workflow and turns it into something an editor can read in one sitting.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.
Create Your AI Usage Card