Python Web Script Upload to Google Drive

· 14 min read · Updated Feb 2022 · Application Programming Interfaces

Google Drive enables you to store your files in the cloud, which you can access anytime and anywhere in the world. In this tutorial, you will learn how to list your Google Drive files, search over them, download stored files, and even upload local files into your drive programmatically using Python.

Here is the table of contents:

  • Enable the Drive API
  • List Files and Directories
  • Upload Files
  • Search for Files and Directories
  • Download Files

To get started, let's install the required libraries for this tutorial:

          pip3 install google-api-python-client google-auth-httplib2 google-auth-oauthlib tabulate requests tqdm        

Enable the Drive API

Enabling the Google Drive API is very similar to other Google APIs such as the Gmail API, YouTube API, or Google Search Engine API. First, you need to have a Google account with Google Drive enabled. Head to this page and click the "Enable the Drive API" button as shown below:

Enable the Drive API

A new window will pop up; choose your type of application. I will stick with the "Desktop app" and then hit the "Create" button. After that, you'll see another window appear saying you're all set:

Drive API is enabled

Download your credentials by clicking the "Download Client Configuration" button and then "Done".

Finally, you need to put the credentials.json file that was downloaded into your working directory (i.e., where you execute the upcoming Python scripts).

Listing Files and Directories

Before we do anything, we need to authenticate our code to our Google account. The below function does that:

import pickle
import os
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from tabulate import tabulate

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly']

def get_gdrive_service():
    creds = None
    # The file token.pickle stores the user's access and refresh tokens, and is
    # created automatically when the authorization flow completes for the first
    # time.
    if os.path.exists('token.pickle'):
        with open('token.pickle', 'rb') as token:
            creds = pickle.load(token)
    # If there are no (valid) credentials available, let the user log in.
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Save the credentials for the next run
        with open('token.pickle', 'wb') as token:
            pickle.dump(creds, token)
    # return the Google Drive API service
    return build('drive', 'v3', credentials=creds)

We've imported the necessary modules. The above function was grabbed from the Google Drive quickstart page. It basically looks for the token.pickle file to authenticate with your Google account. If it doesn't find it, it uses credentials.json to prompt you for authentication in your browser. After that, it initiates the Google Drive API service and returns it.
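The load-or-create logic in get_gdrive_service() is a generic caching pattern. Here is a minimal standalone sketch of it; load_or_create and its arguments are illustrative names, not part of the Drive API:

```python
import os
import pickle

def load_or_create(path, factory):
    """Return the object cached at `path`, or build it with `factory()` and cache it."""
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)
    obj = factory()
    with open(path, "wb") as f:
        pickle.dump(obj, f)
    return obj
```

get_gdrive_service() does the same with token.pickle, plus a validity/refresh check in between.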

Going to the main part, let's define a function that lists files in our drive:

def main():
    """Shows basic usage of the Drive v3 API.
    Prints the names and ids of the first 5 files the user has access to.
    """
    service = get_gdrive_service()
    # Call the Drive v3 API
    results = service.files().list(
        pageSize=5, fields="nextPageToken, files(id, name, mimeType, size, parents, modifiedTime)").execute()
    # get the results
    items = results.get('files', [])
    # list all files & folders
    list_files(items)

So we used the service.files().list() function to return the first 5 files/folders the user has access to by specifying pageSize=5. We passed some useful fields to the fields parameter to get details about the listed files, such as mimeType (the type of file), size in bytes, parent directory IDs, and the last modified date and time. Check this page to see all other fields.

Notice we used the list_files(items) function, which we haven't defined yet. Since results is now a list of dictionaries, it isn't that readable. We pass items to this function to print them in a human-readable format:

def list_files(items):
    """given items returned by Google Drive API, prints them in a tabular way"""
    if not items:
        # empty drive
        print('No files found.')
    else:
        rows = []
        for item in items:
            # get the File ID
            id = item["id"]
            # get the name of file
            name = item["name"]
            try:
                # parent directory ID
                parents = item["parents"]
            except KeyError:
                # has no parents
                parents = "N/A"
            try:
                # get the size in nice bytes format (KB, MB, etc.)
                size = get_size_format(int(item["size"]))
            except KeyError:
                # not a file, may be a folder
                size = "N/A"
            # get the Google Drive type of file
            mime_type = item["mimeType"]
            # get last modified date time
            modified_time = item["modifiedTime"]
            # append everything to the list
            rows.append((id, name, parents, size, mime_type, modified_time))
        print("Files:")
        # convert to a human readable table
        table = tabulate(rows, headers=["ID", "Name", "Parents", "Size", "Type", "Modified Time"])
        # print the table
        print(table)

We converted the items list of dictionaries into a rows list of tuples, then passed them to the tabulate module we installed earlier to print them in a nice format. Let's call the main() function:

if __name__ == '__main__':
    main()

See my output:

Files:
ID                                 Name                            Parents                  Size      Type                          Modified Time
---------------------------------  ------------------------------  -----------------------  --------  ----------------------------  ------------------------
1FaD2BVO_ppps2BFm463JzKM-gGcEdWVT  some_text.txt                   ['0AOEK-gp9UUuOUk9RVA']  31.00B    text/plain                    2020-05-15T13:22:20.000Z
1vRRRh5OlXpb-vJtphPweCvoh7qYILJYi  google-drive-512.png            ['0AOEK-gp9UUuOUk9RVA']  15.62KB   image/png                     2020-05-14T23:57:18.000Z
1wYY_5Fic8yt8KSy8nnQfjah9EfVRDoIE  bbc.zip                         ['0AOEK-gp9UUuOUk9RVA']  863.61KB  application/x-zip-compressed  2019-08-19T09:52:22.000Z
1FX-KwO6EpCMQg9wtsitQ-JUqYduTWZub  Nasdaq 100 Historical Data.csv  ['0AOEK-gp9UUuOUk9RVA']  363.10KB  text/csv                      2019-05-17T16:00:44.000Z
1shTHGozbqzzy9Rww9IAV5_CCzgPrO30R  my_python_code.py               ['0AOEK-gp9UUuOUk9RVA']  1.92MB    text/x-python                 2019-05-13T14:21:10.000Z

These are the files in my Google Drive. Notice the Size column is scaled in bytes; that's because we used the get_size_format() function in list_files(). Here is the code for it:

def get_size_format(b, factor=1024, suffix="B"):
    """
    Scale bytes to its proper byte format
    e.g:
        1253656 => '1.20MB'
        1253656678 => '1.17GB'
    """
    for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
        if b < factor:
            return f"{b:.2f}{unit}{suffix}"
        b /= factor
    return f"{b:.2f}Y{suffix}"

The above function should be defined before running the main() method. Otherwise, it'll raise an error. For convenience, check the full code.
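You can sanity-check the scaling against the docstring examples. This snippet repeats the function so it runs standalone:

```python
def get_size_format(b, factor=1024, suffix="B"):
    """Scale bytes to a human-readable unit, e.g. 1253656 => '1.20MB'."""
    for unit in ["", "K", "M", "G", "T", "P", "E", "Z"]:
        if b < factor:
            return f"{b:.2f}{unit}{suffix}"
        b /= factor
    return f"{b:.2f}Y{suffix}"

print(get_size_format(31))          # 31.00B
print(get_size_format(1253656))     # 1.20MB
print(get_size_format(1253656678))  # 1.17GB
```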

Remember, after you run the script, you'll be prompted in your default browser to select your Google account and permit your application the scopes you specified earlier. Don't worry, this will only happen the first time you run it; after that, token.pickle will be saved and authentication details will be loaded from there instead.

Note: Sometimes, you'll see a "This application is not verified" warning (since Google didn't verify your app) after choosing your Google account. It's okay to go to the "Advanced" section and allow the application access to your account.

Upload Files

To upload files to our Google Drive, we need to change the SCOPES list we specified earlier; we need to add the permission to add files/folders:

from __future__ import print_function
import pickle
import os.path
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.http import MediaFileUpload

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly',
          'https://www.googleapis.com/auth/drive.file']

A different scope means different privileges, and you need to delete the token.pickle file in your working directory and rerun the code to authenticate with the new scope.
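Deleting the stale token can be scripted too. reset_token() below is a small hypothetical helper, not part of the Google libraries:

```python
import os

def reset_token(path="token.pickle"):
    """Remove the cached token so the next run re-authenticates with the new scopes."""
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```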

We will use the same get_gdrive_service() function to authenticate our account. Let's make a function to create a folder and upload a sample file to it:

def upload_files():
    """
    Creates a folder and upload a file to it
    """
    # authenticate account
    service = get_gdrive_service()
    # folder details we want to make
    folder_metadata = {
        "name": "TestFolder",
        "mimeType": "application/vnd.google-apps.folder"
    }
    # create the folder
    file = service.files().create(body=folder_metadata, fields="id").execute()
    # get the folder id
    folder_id = file.get("id")
    print("Folder ID:", folder_id)
    # upload a text file
    # first, define file metadata, such as the name and the parent folder ID
    file_metadata = {
        "name": "test.txt",
        "parents": [folder_id]
    }
    # upload
    media = MediaFileUpload("test.txt", resumable=True)
    file = service.files().create(body=file_metadata, media_body=media, fields='id').execute()
    print("File created, id:", file.get("id"))

We used the service.files().create() method to create a new folder. We passed the folder_metadata dictionary that has the type and name of the folder we want to create, and we passed fields="id" to retrieve the folder ID so we can upload a file into that folder.
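The metadata bodies are plain dicts. A small hypothetical helper (folder_meta and file_meta are illustrative names, not Drive API calls) shows the two shapes used in this section:

```python
FOLDER_MIME = "application/vnd.google-apps.folder"

def folder_meta(name):
    # body for service.files().create() when making a folder
    return {"name": name, "mimeType": FOLDER_MIME}

def file_meta(name, folder_id=None):
    # body for service.files().create() when uploading a file;
    # "parents" places the file inside an existing folder
    meta = {"name": name}
    if folder_id:
        meta["parents"] = [folder_id]
    return meta
```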

Next, we used the MediaFileUpload class to upload the sample file and passed it to the same service.files().create() method. Make sure you have a test file of your choice called test.txt; this time we specified the "parents" attribute in the metadata dictionary, which is simply the folder we just created. Let's run it:

if __name__ == '__main__':
    upload_files()

After I ran the code, a new folder was created in my Google Drive:

A folder created using Google Drive API in Python

And indeed, after I enter that folder, I see the file we just uploaded:

File uploaded using Google Drive API in Python

We used a text file for demonstration, but you can upload any type of file you want. Check the full code of uploading files to Google Drive.

Search for Files and Directories

Google Drive enables us to search for files and directories using the previously used list() method just by passing the 'q' parameter. The below function takes the Drive API service and a query and returns the filtered items:

def search(service, query):
    # search for the file
    result = []
    page_token = None
    while True:
        response = service.files().list(q=query,
                                        spaces="drive",
                                        fields="nextPageToken, files(id, name, mimeType)",
                                        pageToken=page_token).execute()
        # iterate over filtered files
        for file in response.get("files", []):
            result.append((file["id"], file["name"], file["mimeType"]))
        page_token = response.get('nextPageToken', None)
        if not page_token:
            # no more files
            break
    return result

Let's see how to use this function:

def main():
    # filter to text files
    filetype = "text/plain"
    # authenticate Google Drive API
    service = get_gdrive_service()
    # search for files that have the type of text/plain
    search_result = search(service, query=f"mimeType='{filetype}'")
    # convert to table to print well
    table = tabulate(search_result, headers=["ID", "Name", "Type"])
    print(table)

So we're filtering text/plain files here by using "mimeType='text/plain'" as the query parameter. If you want to filter by name instead, you can just use "name='filename.ext'" as the query parameter. See the Google Drive API documentation for more detailed information.
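Query clauses can also be combined with `and` per the Drive v3 query grammar. build_drive_query() below is a hypothetical convenience wrapper, not a Drive API function:

```python
def build_drive_query(mime_type=None, name=None, name_contains=None):
    """Compose a Drive v3 'q' string from optional filters."""
    clauses = []
    if mime_type:
        clauses.append(f"mimeType='{mime_type}'")
    if name:
        clauses.append(f"name='{name}'")
    if name_contains:
        clauses.append(f"name contains '{name_contains}'")
    # the Drive query language joins conditions with 'and'
    return " and ".join(clauses)
```

For example, build_drive_query(mime_type="text/plain", name="test.txt") produces "mimeType='text/plain' and name='test.txt'", which can be passed straight to search().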

Let's execute this:

if __name__ == '__main__':
    main()

Output:

ID                                 Name           Type
---------------------------------  -------------  ----------
15gdpNEYnZ8cvi3PhRjNTvW8mdfix9ojV  test.txt       text/plain
1FaE2BVO_rnps2BFm463JwPN-gGcDdWVT  some_text.txt  text/plain

Check the full code here.

Related: How to Use Gmail API in Python.

Download Files

To download files, we first need to get the file we want to download. We can either search for it using the previous code or manually get its drive ID. In this section, we are going to search for the file by name and download it to our local disk:

import pickle
import os
import re
import io
from googleapiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request
from googleapiclient.http import MediaIoBaseDownload
import requests
from tqdm import tqdm

# If modifying these scopes, delete the file token.pickle.
SCOPES = ['https://www.googleapis.com/auth/drive.metadata',
          'https://www.googleapis.com/auth/drive',
          'https://www.googleapis.com/auth/drive.file'
          ]

I've added two scopes here. That's because we need to create permission to make files shareable and downloadable. Here is the main function:

def download():
    service = get_gdrive_service()
    # the name of the file you want to download from Google Drive
    filename = "bbc.zip"
    # search for the file by name
    search_result = search(service, query=f"name='{filename}'")
    # get the GDrive ID of the file
    file_id = search_result[0][0]
    # make it shareable
    service.permissions().create(body={"role": "reader", "type": "anyone"}, fileId=file_id).execute()
    # download file
    download_file_from_google_drive(file_id, filename)

You saw the first three lines in previous recipes. We simply authenticate with our Google account and search for the desired file to download.

After that, we extract the file ID and create a new permission that allows us to download the file; this is the same as creating a shareable link button in the Google Drive web interface.
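The permission body is just a dict with a role and a type. Wrapping it in a helper makes the intent explicit; share_body() is an illustrative name, not part of the client library:

```python
def share_body(role="reader", who="anyone"):
    """Body for service.permissions().create().

    role: 'reader', 'commenter', or 'writer'
    who:  'anyone' makes the file accessible to anyone with the link
    """
    return {"role": role, "type": who}
```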

Finally, we use our defined download_file_from_google_drive() function to download the file. There you have it:

def download_file_from_google_drive(id, destination):
    def get_confirm_token(response):
        for key, value in response.cookies.items():
            if key.startswith('download_warning'):
                return value
        return None

    def save_response_content(response, destination):
        CHUNK_SIZE = 32768
        # get the file size from Content-Length response header
        file_size = int(response.headers.get("Content-Length", 0))
        # extract Content-Disposition from response headers
        content_disposition = response.headers.get("content-disposition")
        # parse filename
        filename = re.findall("filename=\"(.+)\"", content_disposition)[0]
        print("[+] File size:", file_size)
        print("[+] File name:", filename)
        progress = tqdm(response.iter_content(CHUNK_SIZE), f"Downloading {filename}", total=file_size, unit="Byte", unit_scale=True, unit_divisor=1024)
        with open(destination, "wb") as f:
            for chunk in progress:
                if chunk: # filter out keep-alive new chunks
                    f.write(chunk)
                    # update the progress bar
                    progress.update(len(chunk))
        progress.close()

    # base URL for download
    URL = "https://docs.google.com/uc?export=download"
    # init a HTTP session
    session = requests.Session()
    # make a request
    response = session.get(URL, params={'id': id}, stream=True)
    print("[+] Downloading", response.url)
    # get confirmation token
    token = get_confirm_token(response)
    if token:
        params = {'id': id, 'confirm': token}
        response = session.get(URL, params=params, stream=True)
    # download to disk
    save_response_content(response, destination)

I've grabbed part of the above code from the downloading files tutorial; it simply makes a GET request to the target URL we constructed by passing the file ID as params in the session.get() method.
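The URL that session.get() builds from its params argument is equivalent to appending the file ID as a query parameter; a quick sketch (gdrive_download_url is an illustrative helper name):

```python
def gdrive_download_url(file_id):
    # same URL the session constructs from URL + params={'id': file_id}
    return f"https://docs.google.com/uc?export=download&id={file_id}"

print(gdrive_download_url("abc123"))  # https://docs.google.com/uc?export=download&id=abc123
```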

I've used the tqdm library to print a progress bar to see when it'll finish, which will come in handy for large files. Let's execute it:

if __name__ == '__main__':
    download()

This will search for the bbc.zip file, download it, and save it in your working directory. Check the full code.
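The chunked write inside save_response_content() can be exercised in isolation with an in-memory buffer; save_chunks() is an illustrative stand-in for that loop, not part of the tutorial's code:

```python
import io

def save_chunks(chunks, buffer):
    """Write non-empty chunks to `buffer`, returning the number of bytes written."""
    written = 0
    for chunk in chunks:
        if chunk:  # skip keep-alive chunks, as the download loop does
            buffer.write(chunk)
            written += len(chunk)
    return written

buf = io.BytesIO()
print(save_chunks([b"hello ", b"", b"world"], buf))  # 11
```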

Conclusion

Alright, there you have it. These are basically the core functionalities of Google Drive. Now you know how to do them in Python without manual mouse clicks!

Remember, whenever you change the SCOPES list, you need to delete the token.pickle file to authenticate to your account again with the new scopes. See this page for further information, along with a list of scopes and their explanations.

Feel free to edit the code to take file names as parameters to download or upload them. Go and try to make the script as dynamic as possible by introducing the argparse module to make some useful scripts. Let's see what you build!

Below is a list of other Google API tutorials, if you want to check them out:

  • How to Extract Google Trends Data in Python.
  • How to Use Google Custom Search Engine API in Python.
  • How to Extract YouTube Data using YouTube API in Python.
  • How to Use Gmail API in Python.

Happy Coding ♥




Source: https://www.thepythoncode.com/article/using-google-drive--api-in-python
