Add BigBirdPegasus #10991

Merged
Changes from 44 commits (56 commits total)
84b8ab3
init bigbird pegasus
vasudevgupta7 Mar 30, 2021
5c0307b
add debugging nb ; update config
vasudevgupta7 Apr 2, 2021
ee47797
init conversion
vasudevgupta7 Apr 2, 2021
677b72a
update conversion script
vasudevgupta7 Apr 4, 2021
94861d6
complete conversion script
vasudevgupta7 Apr 5, 2021
7529516
init forward()
vasudevgupta7 Apr 7, 2021
2ce502b
complete forward()
vasudevgupta7 Apr 10, 2021
72807a7
add tokenizer
vasudevgupta7 Apr 10, 2021
0f00e43
add some slow tests
vasudevgupta7 Apr 12, 2021
3bee313
commit current
vasudevgupta7 Apr 13, 2021
2f72512
fix copies
vasudevgupta7 Apr 13, 2021
af47e3c
add docs
vasudevgupta7 Apr 13, 2021
bbe710f
add conversion script for bigbird-roberta-summarization
vasudevgupta7 Apr 13, 2021
7b1ecce
remove TODO
vasudevgupta7 Apr 16, 2021
1de6954
small fixups
vasudevgupta7 Apr 17, 2021
fb10be0
correct tokenizer
patrickvonplaten Apr 19, 2021
c78bf6f
add bigbird core for now
vasudevgupta7 Apr 22, 2021
38c7638
fix config
vasudevgupta7 Apr 23, 2021
0b87a70
fix more
vasudevgupta7 Apr 30, 2021
c403796
revert pegasus-tokenizer back
vasudevgupta7 Apr 30, 2021
30a586a
make style
vasudevgupta7 Apr 30, 2021
9618f8b
everything working for pubmed; yayygit status
vasudevgupta7 Apr 30, 2021
53d99b5
complete tests finally
vasudevgupta7 Apr 30, 2021
a83e681
remove bigbird pegasus tok
patrickvonplaten May 4, 2021
789d170
correct tokenizer
patrickvonplaten May 4, 2021
324cb89
correct tests
patrickvonplaten May 4, 2021
24f7931
add tokenizer files
patrickvonplaten May 4, 2021
94326e1
merge conflicts
patrickvonplaten May 4, 2021
0657b81
finish make style
patrickvonplaten May 4, 2021
0aaddf4
fix test
patrickvonplaten May 4, 2021
0c11023
update
patrickvonplaten May 4, 2021
2b23619
make style
patrickvonplaten May 4, 2021
53d3a45
fix tok utils base file
patrickvonplaten May 4, 2021
2d64dda
make fix-copies
patrickvonplaten May 4, 2021
5157815
clean a bit
vasudevgupta7 May 4, 2021
cd2a5b6
small update
vasudevgupta7 May 4, 2021
237c118
fix some suggestions
vasudevgupta7 May 4, 2021
2d217e3
add to readme
vasudevgupta7 May 4, 2021
11d9c2e
fix a bit, clean tests
vasudevgupta7 May 5, 2021
b556fc9
fix more tests
vasudevgupta7 May 5, 2021
a066db7
Update src/transformers/__init__.py
patrickvonplaten May 6, 2021
9c0b312
Update src/transformers/__init__.py
patrickvonplaten May 6, 2021
4d706f7
make fix-copies
patrickvonplaten May 6, 2021
9a52751
complete attn switching, auto-padding left
vasudevgupta7 May 6, 2021
b13e76a
make style
patrickvonplaten May 6, 2021
97e1f71
fix auto-padding test
vasudevgupta7 May 6, 2021
0ba203a
make style
vasudevgupta7 May 6, 2021
0a85bb2
fix batched attention tests
patrickvonplaten May 6, 2021
89e9d5c
Merge branch 'add_bigbird_pegasus' of https://github.com/vasudevgupta…
patrickvonplaten May 6, 2021
7d2cae9
put tolerance at 1e-1 for stand-alone decoder test
patrickvonplaten May 6, 2021
34c3a63
fix docs
patrickvonplaten May 6, 2021
c229de1
fix tests
patrickvonplaten May 6, 2021
1d8d60d
correct slow tokenizer conversion
patrickvonplaten May 6, 2021
dfb74ea
Apply suggestions from code review
vasudevgupta7 May 6, 2021
ca77ad3
complete remaining suggestions
vasudevgupta7 May 6, 2021
2eb7563
fix test
patrickvonplaten May 7, 2021
@@ -195,6 +195,7 @@ Current number of checkpoints: ![](https://img.shields.io/endpoint?url=https://h
1. **[BERT](https://huggingface.co/transformers/model_doc/bert.html)** (from Google) released with the paper [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805) by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.
1. **[BERT For Sequence Generation](https://huggingface.co/transformers/model_doc/bertgeneration.html)** (from Google) released with the paper [Leveraging Pre-trained Checkpoints for Sequence Generation Tasks](https://arxiv.org/abs/1907.12461) by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.
1. **[BigBird-RoBERTa](https://huggingface.co/transformers/model_doc/bigbird.html)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[BigBird-Pegasus](https://huggingface.co/transformers/model_doc/bigbird_pegasus.html)** (from Google Research) released with the paper [Big Bird: Transformers for Longer Sequences](https://arxiv.org/abs/2007.14062) by Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, Amr Ahmed.
1. **[Blenderbot](https://huggingface.co/transformers/model_doc/blenderbot.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BlenderbotSmall](https://huggingface.co/transformers/model_doc/blenderbot_small.html)** (from Facebook) released with the paper [Recipes for building an open-domain chatbot](https://arxiv.org/abs/2004.13637) by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston.
1. **[BORT](https://huggingface.co/transformers/model_doc/bort.html)** (from Alexa) released with the paper [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499) by Adrian de Wynter and Daniel J. Perry.


@@ -0,0 +1,98 @@
..
    Copyright 2021 The HuggingFace Team. All rights reserved.

    Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
    the License. You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
    an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
    specific language governing permissions and limitations under the License.
BigBirdPegasus
-----------------------------------------------------------------------------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The BigBird model was proposed in `Big Bird: Transformers for Longer Sequences <https://arxiv.org/abs/2007.14062>`__ by
Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon,
Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention
based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse
attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it
has been shown that applying sparse, global, and random attention approximates full attention, while being
computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context,
BigBird has shown improved performance on various long document NLP tasks, such as question answering and
summarization, compared to BERT or RoBERTa.

The abstract from the paper is the following:

*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.*

Tips:

- For a detailed explanation of how BigBird's attention works, see `this blog post
  <https://huggingface.co/blog/big-bird>`__.
- BigBird comes with 2 implementations: **original_full** & **block_sparse**. For sequence lengths < 1024, using
  **original_full** is advised as there is no benefit in using **block_sparse** attention.
- The code currently uses a window size of 3 blocks and 2 global blocks.
- Sequence length must be divisible by the block size.
- The current implementation supports only **ITC**.
- The current implementation does not support **num_random_blocks = 0**.
- BigBirdPegasus uses the `PegasusTokenizer
  <https://github.com/huggingface/transformers/blob/master/src/transformers/models/pegasus/tokenization_pegasus.py>`__.

The original code can be found `here <https://github.com/google-research/bigbird>`__.
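The example below is a minimal summarization sketch; the checkpoint name and generation arguments are assumptions based on the PubMed weights converted in this pull request, not a prescribed recipe:

.. code-block:: python

    from transformers import BigBirdPegasusForConditionalGeneration, PegasusTokenizer

    # checkpoint name assumed; any BigBirdPegasus summarization checkpoint should work the same way
    tokenizer = PegasusTokenizer.from_pretrained("google/bigbird-pegasus-large-pubmed")
    model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-pubmed")

    article = "Replace this with a long scientific article."
    inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=4096)

    # block_sparse attention is the default for long inputs; for short inputs original_full can be
    # requested at load time, e.g. from_pretrained(..., attention_type="original_full")
    summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
    print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))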

BigBirdPegasusConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusConfig
:members:
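
A minimal configuration sketch (a hypothetical example; the sparse-attention arguments below mirror the BigBird configuration and the values shown are the defaults):

.. code-block:: python

    from transformers import BigBirdPegasusConfig, BigBirdPegasusModel

    # block_sparse attention with the 3-block window and 2 global blocks mentioned in the tips above
    config = BigBirdPegasusConfig(attention_type="block_sparse", block_size=64, num_random_blocks=3)
    model = BigBirdPegasusModel(config)

    # for short sequences (< 1024 tokens), original_full attention avoids the block-sparse overhead
    full_attention_config = BigBirdPegasusConfig(attention_type="original_full")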


BigBirdPegasusModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusModel
:members: forward


BigBirdPegasusForConditionalGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForConditionalGeneration
:members: forward


BigBirdPegasusForSequenceClassification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForSequenceClassification
:members: forward


BigBirdPegasusForQuestionAnswering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForQuestionAnswering
:members: forward


BigBirdPegasusForCausalLM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.BigBirdPegasusForCausalLM
:members: forward


@@ -155,6 +155,10 @@
"models.bert_japanese": ["BertJapaneseTokenizer", "CharacterTokenizer", "MecabTokenizer"],
"models.bertweet": ["BertweetTokenizer"],
"models.big_bird": ["BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP", "BigBirdConfig", "BigBirdTokenizer"],
"models.bigbird_pegasus": [
"BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP",
"BigBirdPegasusConfig",
],
"models.blenderbot": ["BLENDERBOT_PRETRAINED_CONFIG_ARCHIVE_MAP", "BlenderbotConfig", "BlenderbotTokenizer"],
"models.blenderbot_small": [
"BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP",
@@ -543,6 +547,16 @@
"load_tf_weights_in_big_bird",
]
)
_import_structure["models.bigbird_pegasus"].extend(
[
"BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST",
"BigBirdPegasusForCausalLM",
"BigBirdPegasusForConditionalGeneration",
"BigBirdPegasusForQuestionAnswering",
"BigBirdPegasusForSequenceClassification",
"BigBirdPegasusModel",
]
)
_import_structure["models.blenderbot"].extend(
[
"BLENDERBOT_PRETRAINED_MODEL_ARCHIVE_LIST",
@@ -1519,6 +1533,7 @@
from .models.bert_japanese import BertJapaneseTokenizer, CharacterTokenizer, MecabTokenizer
from .models.bertweet import BertweetTokenizer
from .models.big_bird import BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdConfig, BigBirdTokenizer
from .models.bigbird_pegasus import BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdPegasusConfig
from .models.blenderbot import BLENDERBOT_PRETRAINED_CONFIG_ARCHIVE_MAP, BlenderbotConfig, BlenderbotTokenizer
from .models.blenderbot_small import (
BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -1672,6 +1687,7 @@
from .models.bart import BartTokenizerFast
from .models.barthez import BarthezTokenizerFast
from .models.bert import BertTokenizerFast
from .models.bigbird_pegasus import BigBirdPegasusTokenizerFast
from .models.camembert import CamembertTokenizerFast
from .models.convbert import ConvBertTokenizerFast
from .models.deberta import DebertaTokenizerFast
@@ -1863,6 +1879,14 @@
BigBirdPreTrainedModel,
load_tf_weights_in_big_bird,
)
from .models.bigbird_pegasus import (
BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST,
BigBirdPegasusForCausalLM,
BigBirdPegasusForConditionalGeneration,
BigBirdPegasusForQuestionAnswering,
BigBirdPegasusForSequenceClassification,
BigBirdPegasusModel,
)
from .models.blenderbot import (
BLENDERBOT_PRETRAINED_MODEL_ARCHIVE_LIST,
BlenderbotForCausalLM,
@@ -635,9 +635,14 @@ def vocab(self, proto):
vocab = [
(self.original_tokenizer.pad_token, 0.0),
(self.original_tokenizer.eos_token, 0.0),
(self.original_tokenizer.mask_token_sent, 0.0),
(self.original_tokenizer.mask_token, 0.0),
]

if self.original_tokenizer.mask_token_sent is not None:
vocab += [
(self.original_tokenizer.mask_token_sent, 0.0),
(self.original_tokenizer.mask_token, 0.0),
This conversation was marked as resolved by patrickvonplaten

sgugger (Member) commented on May 6, 2021:

Should we check for self.original_tokenizer.mask_token separately?

patrickvonplaten (Member) replied on May 6, 2021:

Good point! I'll make this change

]

vocab += [(f"<unk_{i}>", -100.0) for i in range(2, self.original_tokenizer.offset)]
vocab += [(piece.piece, piece.score) for piece in proto.pieces[2:]]
return vocab
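
For reference, a sketch of how the vocab construction could check the two mask tokens separately, as suggested in the review above (the exact guard conditions in the merged code may differ):

def vocab(self, proto):
    # pad and eos always occupy the first two ids
    vocab = [
        (self.original_tokenizer.pad_token, 0.0),
        (self.original_tokenizer.eos_token, 0.0),
    ]

    # only add the sentence-level mask token if the tokenizer defines one
    if self.original_tokenizer.mask_token_sent is not None:
        vocab += [(self.original_tokenizer.mask_token_sent, 0.0)]

    # check the regular mask token separately, since a tokenizer may define it without mask_token_sent
    if self.original_tokenizer.mask_token is not None:
        vocab += [(self.original_tokenizer.mask_token, 0.0)]

    vocab += [(f"<unk_{i}>", -100.0) for i in range(2, self.original_tokenizer.offset)]
    vocab += [(piece.piece, piece.score) for piece in proto.pieces[2:]]
    return vocab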
@@ -26,6 +26,7 @@
bert_japanese,
bertweet,
big_bird,
bigbird_pegasus,
blenderbot,
blenderbot_small,
camembert,
@@ -23,6 +23,10 @@
from ..bert.configuration_bert import BERT_PRETRAINED_CONFIG_ARCHIVE_MAP, BertConfig
from ..bert_generation.configuration_bert_generation import BertGenerationConfig
from ..big_bird.configuration_big_bird import BIG_BIRD_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdConfig
from ..bigbird_pegasus.configuration_bigbird_pegasus import (
BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP,
BigBirdPegasusConfig,
)
from ..blenderbot.configuration_blenderbot import BLENDERBOT_PRETRAINED_CONFIG_ARCHIVE_MAP, BlenderbotConfig
from ..blenderbot_small.configuration_blenderbot_small import (
BLENDERBOT_SMALL_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -86,6 +90,7 @@
(key, value)
for pretrained_map in [
# Add archive maps here
BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP,
DEIT_PRETRAINED_CONFIG_ARCHIVE_MAP,
LUKE_PRETRAINED_CONFIG_ARCHIVE_MAP,
GPT_NEO_PRETRAINED_CONFIG_ARCHIVE_MAP,
@@ -139,6 +144,7 @@
CONFIG_MAPPING = OrderedDict(
[
# Add configs here
("bigbird_pegasus", BigBirdPegasusConfig),
("deit", DeiTConfig),
("luke", LukeConfig),
("gpt_neo", GPTNeoConfig),
@@ -198,6 +204,7 @@
MODEL_NAMES_MAPPING = OrderedDict(
[
# Add full (and cased) model names here
("bigbird_pegasus", "BigBirdPegasus"),
("deit", "DeiT"),
("luke", "LUKE"),
("gpt_neo", "GPT Neo"),
@@ -59,6 +59,13 @@
BigBirdForTokenClassification,
BigBirdModel,
)
from ..bigbird_pegasus.modeling_bigbird_pegasus import (
BigBirdPegasusForCausalLM,
BigBirdPegasusForConditionalGeneration,
BigBirdPegasusForQuestionAnswering,
BigBirdPegasusForSequenceClassification,
BigBirdPegasusModel,
)
from ..blenderbot.modeling_blenderbot import BlenderbotForCausalLM, BlenderbotForConditionalGeneration, BlenderbotModel
from ..blenderbot_small.modeling_blenderbot_small import (
BlenderbotSmallForCausalLM,
@@ -288,6 +295,7 @@
BertConfig,
BertGenerationConfig,
BigBirdConfig,
BigBirdPegasusConfig,
BlenderbotConfig,
BlenderbotSmallConfig,
CamembertConfig,
@@ -344,6 +352,7 @@
MODEL_MAPPING = OrderedDict(
[
# Base model mapping
(BigBirdPegasusConfig, BigBirdPegasusModel),
(DeiTConfig, DeiTModel),
(LukeConfig, LukeModel),
(GPTNeoConfig, GPTNeoModel),
@@ -439,6 +448,7 @@
MODEL_WITH_LM_HEAD_MAPPING = OrderedDict(
[
# Model with LM heads mapping
(BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration),
(GPTNeoConfig, GPTNeoForCausalLM),
(BigBirdConfig, BigBirdForMaskedLM),
(Speech2TextConfig, Speech2TextForConditionalGeneration),
@@ -485,6 +495,7 @@
MODEL_FOR_CAUSAL_LM_MAPPING = OrderedDict(
[
# Model for Causal LM mapping
(BigBirdPegasusConfig, BigBirdPegasusForCausalLM),
(GPTNeoConfig, GPTNeoForCausalLM),
(BigBirdConfig, BigBirdForCausalLM),
(CamembertConfig, CamembertForCausalLM),
@@ -557,6 +568,7 @@
MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING = OrderedDict(
[
# Model for Seq2Seq Causal LM mapping
(BigBirdPegasusConfig, BigBirdPegasusForConditionalGeneration),
(M2M100Config, M2M100ForConditionalGeneration),
(LEDConfig, LEDForConditionalGeneration),
(BlenderbotSmallConfig, BlenderbotSmallForConditionalGeneration),
@@ -577,6 +589,7 @@
MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING = OrderedDict(
[
# Model for Sequence Classification mapping
(BigBirdPegasusConfig, BigBirdPegasusForSequenceClassification),
(BigBirdConfig, BigBirdForSequenceClassification),
(ConvBertConfig, ConvBertForSequenceClassification),
(LEDConfig, LEDForSequenceClassification),
@@ -614,6 +627,7 @@
MODEL_FOR_QUESTION_ANSWERING_MAPPING = OrderedDict(
[
# Model for Question Answering mapping
(BigBirdPegasusConfig, BigBirdPegasusForQuestionAnswering),
(BigBirdConfig, BigBirdForQuestionAnswering),
(ConvBertConfig, ConvBertForQuestionAnswering),
(LEDConfig, LEDForQuestionAnswering),
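
With the registrations above in place, the auto classes resolve BigBirdPegasus checkpoints without naming the model class explicitly. A brief illustration (the checkpoint name is an assumption based on the weights converted in this pull request):

from transformers import AutoConfig, AutoModelForSeq2SeqLM

# model_type "bigbird_pegasus" in the checkpoint's config.json resolves to BigBirdPegasusConfig via CONFIG_MAPPING
config = AutoConfig.from_pretrained("google/bigbird-pegasus-large-arxiv")

# resolves to BigBirdPegasusForConditionalGeneration via MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING
model = AutoModelForSeq2SeqLM.from_pretrained("google/bigbird-pegasus-large-arxiv")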
@@ -0,0 +1,70 @@
# flake8: noqa
# There's no way to ignore "F401 '...' imported but unused" warnings in this
# module, but to preserve other warnings. So, don't check this module at all.

# Copyright 2020 The HuggingFace Team. All rights reserved.
This conversation was marked as resolved by vasudevgupta7

patil-suraj (Member) commented on May 6, 2021:

Suggested change:
- # Copyright 2020 The HuggingFace Team. All rights reserved.
+ # Copyright 2021 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...file_utils import _BaseLazyModule, is_torch_available


_import_structure = {
    "configuration_bigbird_pegasus": ["BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP", "BigBirdPegasusConfig"],
}

if is_torch_available():
    _import_structure["modeling_bigbird_pegasus"] = [
        "BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST",
        "BigBirdPegasusForCausalLM",
        "BigBirdPegasusForConditionalGeneration",
        "BigBirdPegasusForQuestionAnswering",
        "BigBirdPegasusForSequenceClassification",
        "BigBirdPegasusModel",
        "BigBirdPegasusPreTrainedModel",
    ]


if TYPE_CHECKING:
    from .configuration_bigbird_pegasus import BIGBIRD_PEGASUS_PRETRAINED_CONFIG_ARCHIVE_MAP, BigBirdPegasusConfig

    if is_torch_available():
        from .modeling_bigbird_pegasus import (
            BIGBIRD_PEGASUS_PRETRAINED_MODEL_ARCHIVE_LIST,
            BigBirdPegasusForCausalLM,
            BigBirdPegasusForConditionalGeneration,
            BigBirdPegasusForQuestionAnswering,
            BigBirdPegasusForSequenceClassification,
            BigBirdPegasusModel,
            BigBirdPegasusPreTrainedModel,
        )


else:
    import importlib
    import os
    import sys

    class _LazyModule(_BaseLazyModule):
        """
        Module class that surfaces all objects but only performs associated imports when the objects are requested.
        """

        __file__ = globals()["__file__"]
        __path__ = [os.path.dirname(__file__)]

        def _get_module(self, module_name: str):
            return importlib.import_module("." + module_name, self.__name__)

    sys.modules[__name__] = _LazyModule(__name__, _import_structure)
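
As a rough sketch of what the lazy-module pattern above buys (the behaviour described here is an assumption about _BaseLazyModule, not part of the diff): importing the package is cheap, and the torch-heavy modeling file is only imported when one of its attributes is first accessed.

# importing the submodule does not yet import modeling_bigbird_pegasus
import transformers.models.bigbird_pegasus as bigbird_pegasus

# the first attribute access triggers _get_module("modeling_bigbird_pegasus") and the real import
model_cls = bigbird_pegasus.BigBirdPegasusForConditionalGeneration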