ColabFold - v1.5.2
+ 12Jun2023: New databases! UniRef30 updated to 2302 and PDB to 230517.
+ We now use PDB100 instead of PDB70 (see notes in the [main](https://colabfold.com) notebook).
+ 12Jun2023: We introduced a new default pairing strategy:
+ Previously, for multimer predictions with more than 2 chains,
+ we only paired sequences when all of them matched taxonomically ("complete" pairing).
+ The new default "greedy" strategy pairs any taxonomically matching subsets.
For details of what was changed in v1.5, see the change log!
![](https://github.com/sokrypton/ColabFold/raw/main/.github/ColabFold_Marv_Logo.png)
Making protein folding accessible to all via Google Colab!
| Notebooks | monomers | complexes | mmseqs2 | jackhmmer | templates |
| :-------- | :------: | :-------: | :-----: | :-------: | :-------: |
| AlphaFold2_mmseqs2 | Yes | Yes | Yes | No | Yes |
| AlphaFold2_batch | Yes | Yes | Yes | No | Yes |
| AlphaFold2 (from Deepmind) | Yes | Yes | No | Yes | No |
| relax_amber (relax input structure) | | | | | |
| ESMFold | Yes | Maybe | No | No | No |
BETA (in development) notebooks

| Notebooks | monomers | complexes | mmseqs2 | jackhmmer | templates |
| :-------- | :------: | :-------: | :-----: | :-------: | :-------: |
| RoseTTAFold2 | Yes | Yes | Yes | No | WIP |
| OmegaFold | Yes | Maybe | No | No | No |
OLD retired notebooks

| Notebooks | monomers | complexes | mmseqs2 | jackhmmer | templates |
| :-------- | :------: | :-------: | :-----: | :-------: | :-------: |
| RoseTTAFold | Yes | No | Yes | No | No |
| AlphaFold2_advanced | Yes | Yes | Yes | Yes | No |
| AlphaFold2_complexes | No | Yes | No | No | No |
| AlphaFold2_jackhmmer | Yes | No | Yes | Yes | No |
| AlphaFold2_noTemplates_noMD | | | | | |
| AlphaFold2_noTemplates_yesMD | | | | | |
FAQ
Where can I chat with other ColabFold users?
See our Discord channel!
Can I use the models for Molecular Replacement?
Yes, but be CAREFUL: the B-factor column is populated with pLDDT confidence values (higher = better), whereas Phenix.phaser expects a "real" B-factor (lower = better). See the post from Claudia Millán.
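Because the pLDDT is stored in the B-factor column, you can read it straight out of the predicted PDB file. A minimal sketch (the `read_plddt` helper is ours, not part of ColabFold; the fixed-column slicing follows the standard PDB ATOM record layout):

```python
def read_plddt(pdb_lines):
    """Extract per-atom pLDDT values stored in the B-factor field
    (columns 61-66) of ATOM/HETATM records in a predicted PDB file."""
    plddt = []
    for line in pdb_lines:
        if line.startswith(("ATOM", "HETATM")):
            plddt.append(float(line[60:66]))
    return plddt

# Example ATOM record with pLDDT 91.20 stored in the B-factor column
record = "ATOM      1  N   MET A   1      11.104   6.134  -6.504  1.00 91.20"
print(read_plddt([record]))  # [91.2]
```

Averaging these values per residue gives you the pLDDT profile ColabFold plots; replacing them is what you would do before handing the model to Phenix.phaser.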
What is the maximum length?
Limits depend on the free GPU provided by Google Colab, fingers crossed:
- For a Tesla T4 or Tesla P100 GPU with ~16G memory, the max length is ~2000
- For a Tesla K80 GPU with ~12G memory, the max length is ~1000
To check which GPU you got, open a new code cell and type `!nvidia-smi`
Is it okay to use the MMseqs2 MSA server (`cf.run_mmseqs2`) on a local computer?
You can access the server from a local computer as long as your queries are submitted serially from a single IP. Please do not use multiple computers to query the server.
Where can I download the databases used by ColabFold?
The databases are available at colabfold.mmseqs.com
I want to render my own images of the predicted structures, how do I color by pLDDT?
In PyMOL, for AlphaFold structures: `spectrum b, red_yellow_green_cyan_blue, minimum=50, maximum=90`
If you want to use AlphaFold Colours (credit: Konstantin Korotkov):
set_color n0, [0.051, 0.341, 0.827]
set_color n1, [0.416, 0.796, 0.945]
set_color n2, [0.996, 0.851, 0.212]
set_color n3, [0.992, 0.490, 0.302]
color n0, b < 100; color n1, b < 90
color n2, b < 70; color n3, b < 50

Change log
30Apr2023: Amber is working again in our ColabFold Notebook
29Apr2023: Amber was not working in our notebook due to a Colab update
18Feb2023: v1.5.2 - fixing memory leak for large proteins
- fixing --use_dropout (random seed was not changing between recycles)
06Feb2023: v1.5.1 - fixing: --save-all/--save-recycles
04Feb2023: v1.5.0 - ColabFold updated to use AlphaFold v2.3.1!
03Jan2023: The MSA server's faulty hardware from 12/26 was replaced.
There were intermittent failures on 12/26 and 1/3. Currently,
there are no known issues. Let us know if you experience any.
10Oct2022: Bugfix: random_seed was not being used for alphafold-multimer.
The same structure was returned regardless of the defined seed. This
has been fixed!
13Jul2022: We have set up a new ColabFold MSA server provided by the Korean
Bioinformation Center. It provides accelerated MSA generation; we also
updated UniRef30 to 2022_02 and PDB/PDB70 to 220313.
11Mar2022: We now use AlphaFold-multimer-v2 weights by default for complex modeling.
We also offer the old complex modes "AlphaFold-ptm" and "AlphaFold-multimer-v1".
04Mar2022: ColabFold now uses a much more powerful server for MSAs and searches through the ColabFoldDB instead of BFD/MGnify.
Please let us know if you observe any issues.
26Jan2022: AlphaFold2_mmseqs2, AlphaFold2_batch and colabfold_batch's multimer complex
predictions are now reranked by default using iptmscore*0.8+ptmscore*0.2 instead of ptmscore.
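That reranking formula is easy to reproduce when post-processing your own results. A minimal sketch (the `rank_models` helper and the example scores are illustrative, not ColabFold code):

```python
def rank_models(scores):
    """Rank multimer models by 0.8*ipTM + 0.2*pTM, highest first,
    mirroring the default reranking introduced on 26Jan2022."""
    return sorted(scores,
                  key=lambda s: 0.8 * s["iptm"] + 0.2 * s["ptm"],
                  reverse=True)

models = [
    {"name": "model_1", "iptm": 0.62, "ptm": 0.80},  # combined: 0.656
    {"name": "model_2", "iptm": 0.70, "ptm": 0.55},  # combined: 0.670
]
print([m["name"] for m in rank_models(models)])  # ['model_2', 'model_1']
```

Note how a model with a lower pTM can still rank first: the interface score ipTM dominates with a weight of 0.8.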
16Aug2021: WARNING - MMseqs2 API is undergoing upgrade, you may see error messages.
17Aug2021: If you see any errors, please report them.
17Aug2021: We are still debugging the MSA generation procedure...
20Aug2021: WARNING - MMseqs2 API is undergoing an upgrade; you may see error messages.
To keep Google Colab from crashing on large MSAs, we applied -diff 1000 to keep the
1K most diverse sequences. This caused some large MSAs to degrade in quality,
as sequences close to the query were being merged into a single representative.
We are updating the server (today) to fix this, by making sure
that both diverse sequences and sequences close to the query are included in the final MSA.
We'll post an update here when it is complete.
21Aug2021: The MSA issues should now be resolved! Please report any errors you see.
In short, to reduce MSA size we filter (qsc > 0.8, id > 0.95) and take the 3K
most diverse sequences at different qid (sequence identity to query) intervals
and merge them. More specifically, 3K sequences at qid intervals (0→0.2), (0.2→0.4),
(0.4→0.6), (0.6→0.8) and (0.8→1). If you submitted your sequence between
16Aug2021 and 20Aug2021, we recommend submitting again for best results!
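The qid-interval bucketing described in that entry can be sketched as follows (the `bin_by_qid` helper is illustrative only; ColabFold's real filter runs server-side and additionally applies the qsc/id thresholds and the 3K-most-diverse selection per bin):

```python
def bin_by_qid(seqs):
    """Group sequences into the qid intervals from the 21Aug2021 filter:
    (0-0.2], (0.2-0.4], (0.4-0.6], (0.6-0.8], (0.8-1].
    `seqs` maps a sequence name to its identity to the query (0..1)."""
    edges = [0.2, 0.4, 0.6, 0.8, 1.0]
    bins = {e: [] for e in edges}
    for name, qid in seqs.items():
        for e in edges:          # first edge >= qid decides the bin
            if qid <= e:
                bins[e].append(name)
                break
    return bins

example = {"s1": 0.15, "s2": 0.55, "s3": 0.97}
print({e: b for e, b in bin_by_qid(example).items() if b})
# {0.2: ['s1'], 0.6: ['s2'], 1.0: ['s3']}
```

Taking the most diverse members of each bin before merging keeps both remote homologs and near-query sequences in the final MSA, which was the point of the fix.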
21Aug2021: The use_templates option in AlphaFold2_mmseqs2 is not working properly. We are
working on fixing this. If you are not using templates, this does not affect
the results. Other notebooks that do not use templates are unaffected.
21Aug2021: The templates issue is resolved!
11Nov2021: [AlphaFold2_mmseqs2] now uses AlphaFold-multimer for complex (homo/hetero-oligomer) modeling.
Use the [AlphaFold2_advanced] notebook for the old complex prediction logic.
11Nov2021: ColabFold can be installed locally using pip!
14Nov2021: Template-based prediction works again in the AlphaFold2_mmseqs2 notebook.
14Nov2021: WARNING - "Single-sequence" mode in AlphaFold2_mmseqs2 and AlphaFold2_batch was broken
starting 11Nov2021. The MMseqs2 MSA was being used regardless of selection.
14Nov2021: "Single-sequence" mode is now fixed.
20Nov2021: WARNING - "AMBER" mode in AlphaFold2_mmseqs2 and AlphaFold2_batch was broken
starting 11Nov2021. Unrelaxed proteins were returned instead.
20Nov2021: "AMBER" is fixed, thanks to Kevin Pan!