mirror of https://github.com/snobu/destreamer.git synced 2026-01-30 03:42:16 +00:00

1 Commit

Author: Luca Armaroli
SHA1: f4a9934efd
Message: videoInfo fetching per video rather than in bulk; fixed side effects in main function of this change
Date: 2020-08-12 18:45:14 +01:00
19 changed files with 2053 additions and 2533 deletions


@@ -7,24 +7,6 @@ assignees: ''
---
<!-- ## PLEASE NEVER PASTE YOUR ACCESS TOKEN INTO A GITHUB ISSUE AS IT MAY CONTAIN PRIVATE INFORMATION
# BEFORE OPENING A NEW ISSUE CHECK THE EXISTING ONES AND RUN DESTREAMER WITH THE -v/--verbose flag and paste down below the output
# NEVER PASTE YOUR ACCESS TOKEN INTO A GITHUB ISSUE AS IT MAY CONTAIN PRIVATE INFORMATION
When you paste in output from destreamer, locate your access token (it looks like this: `Authorization: Bearer eyJ....<a lot more base64 encoded text>.....`) and redact it.
# Please fill the form below to give us some more info.
-->
OS:
Launch command used:
<details>
<summary>Verbose log</summary>
```
PASTE VERBOSE LOG HERE
```
</details>


@@ -6,9 +6,6 @@ on:
      - 'README.md'
  branches:
    - master
pull_request:
branches:
- master
 jobs:
   build:
@@ -17,7 +14,7 @@ jobs:
     strategy:
       matrix:
-        node-version: [10.x, 12.x, 13.x]
+        node-version: [8.x, 10.x, 12.x, 13.x]
     steps:
     - uses: actions/checkout@v1

.gitignore (vendored): 3 changes

@@ -3,9 +3,6 @@
 *.log
 *.js
 *.zip
-*.xml
-yarn.lock
 .chrome_data
 node_modules


@@ -1,4 +1,3 @@
 {
-    "eslint.enable": true,
-    "typescript.tsdk": "node_modules\\typescript\\lib"
+    "eslint.enable": true
 }


@@ -2,24 +2,21 @@
<img src="https://github.com/snobu/destreamer/workflows/Node%20CI/badge.svg" alt="CI build status" />
</a>
**destreamer v3.0** is just around the corner. You can try out a pre-release today by cloning [this branch](https://github.com/snobu/destreamer/tree/aria2c_forRealNow).
 ![destreamer](assets/logo.png)
 _(Alternative artwork proposals are welcome! Submit one through an Issue.)_

 # Saves Microsoft Stream videos for offline enjoyment

-### v2 Release, codename _Hammer of Dawn<sup>TM</sup>_
+### v2.1 Release, codename _Hammer of Dawn<sup>TM</sup>_

 This release would not have been possible without the code and time contributed by two distinguished developers: [@lukaarma](https://github.com/lukaarma) and [@kylon](https://github.com/kylon). Thank you!

-### Specialized versions
+### Specialized vesions

 - [Politecnico di Milano][polimi]: fork over at https://github.com/SamanFekri/destreamer
 - [Università di Pisa][unipi]: fork over at https://github.com/Guray00/destreamer-unipi
 - [Università della Calabria][unical]: fork over at https://github.com/peppelongo96/UnicalDown
- [Università degli Studi di Parma][unipr]: fork over at https://github.com/vRuslan/destreamer-unipr
## What's new

### v2.2
@@ -38,10 +35,9 @@ Hopefully this doesn't break the end user agreement for Microsoft Stream. Since
 ## Prereqs

-- [**Node.js**][node]: You'll need Node.js version 8.0 or higher. A GitHub Action runs tests on all major Node versions on every commit. One caveat for Node 8, if you get a `Parse Error` with `code: HPE_HEADER_OVERFLOW` you're out of luck and you'll need to upgrade to Node 10+. PLEASE NOTE WE NO LONGER TEST BUILDS AGAINST NODE 8.x. YOU ARE ON YOUR OWN.
+- [**Node.js**][node]: You'll need Node.js version 8.0 or higher. A GitHub Action runs tests on all major Node versions on every commit. One caveat for Node 8, if you get a `Parse Error` with `code: HPE_HEADER_OVERFLOW` you're out of luck and you'll need to upgrade to Node 10+.
 - **npm**: usually comes with Node.js, type `npm` in your terminal to check for its presence
 - [**ffmpeg**][ffmpeg]: a recent version (year 2019 or above), in `$PATH` or in the same directory as this README file (project root).
-- [**aria2**][aria2]: aria2 is a utility for downloading files with multiple threads, fast.
 - [**git**][git]: one or more npm dependencies require git.

 Destreamer takes a [honeybadger](https://www.youtube.com/watch?v=4r7wHMg5Yjg) approach towards the OS it's running on. We've successfully tested it on Windows, macOS and Linux.
@@ -55,29 +51,6 @@ Note that destreamer won't run in an elevated (Administrator/root) shell. Runnin
**WSL** (Windows Subsystem for Linux) is not supported as it can't easily pop up a browser window. It *may* work by installing an X Window server (like [Xming][xming]) and exporting the default display to it (`export DISPLAY=:0`) before running destreamer. See [this issue for more on WSL v1 and v2][wsl].
## Can i plug in my own browser?
Yes, yes you can. This may be useful if your main browser has some authentication plugins that are required for you to logon to your Microsoft Stream tenant.
To use your own browser for the authentication part, locate the following snippet in `src/destreamer.ts` and `src/TokenCache.ts`:
```typescript
const browser: puppeteer.Browser = await puppeteer.launch({
executablePath: getPuppeteerChromiumPath(),
// …
});
```
Navigate to `chrome://version` in the browser you want to plug in and copy executable path from there. Use double backslash for Windows.
Now, change `executablePath` to reflect the path to your browser and profile (i.e. to use Microsoft Edge on Windows):
```typescript
executablePath: 'C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe',
```
You can add `userDataDir` right after `executablePath` with the path to your browser profile (also shown in `chrome://version`) if you want that loaded as well.
Remember to rebuild (`npm run build`) every time you change this configuration.
## How to build

To build destreamer clone this repository, install dependencies and run the build script -
@@ -98,35 +71,31 @@ Options:
 --help                  Show help  [boolean]
 --version               Show version number  [boolean]
 --username, -u          The username used to log into Microsoft Stream (enabling this will fill in the email field for
-                        you).  [string]
+                        you)  [string]
---videoUrls, -i         List of urls to videos or Microsoft Stream groups.  [array]
+--videoUrls, -i         List of video urls  [array]
 --inputFile, -f         Path to text file containing URLs and optionally outDirs. See the README for more on outDirs.
                         [string]
---outputDirectory, -o   The directory where destreamer will save your downloads.  [string] [default: "videos"]
 --outputTemplate, -t    The template for the title. See the README for more info.
                         [string] [default: "{title} - {publishDate} {uniqueId}"]
+--outputDirectory, -o   The directory where destreamer will save your downloads  [string] [default: "videos"]
---keepLoginCookies, -k  Let Chromium cache identity provider cookies so you can use "Remember me" during login.
-                        Must be used every subsequent time you launch Destreamer if you want to log in automatically.
+--keepLoginCookies, -k  Let Chromium cache identity provider cookies so you can use "Remember me" during login
                         [boolean] [default: false]
---noExperiments, -x     Do not attempt to render video thumbnails in the console.  [boolean] [default: false]
+--noExperiments, -x     Do not attempt to render video thumbnails in the console  [boolean] [default: false]
---simulate, -s          Disable video download and print metadata information to the console.
-                        [boolean] [default: false]
+--simulate, -s          Disable video download and print metadata information to the console  [boolean] [default: false]
---verbose, -v           Print additional information to the console (use this before opening an issue on GitHub).
+--verbose, -v           Print additional information to the console (use this before opening an issue on GitHub)
                         [boolean] [default: false]
---closedCaptions, --cc  Check if closed captions are available and let the user choose which one to download (will not
-                        ask if only one available).  [boolean] [default: false]
+--closedCaptions, --cc  Check if closed captions are aviable and let the user choose which one to download (will not
+                        ask if only one aviable)  [boolean] [default: false]
---noCleanup, --nc       Do not delete the downloaded video file when an FFmpeg error occurs.  [boolean] [default: false]
+--noCleanup, --nc       Do not delete the downloaded video file when an FFmpeg error occurs  [boolean] [default: false]
 --vcodec                Re-encode video track. Specify FFmpeg codec (e.g. libx265) or set to "none" to disable video.
                         [string] [default: "copy"]
 --acodec                Re-encode audio track. Specify FFmpeg codec (e.g. libopus) or set to "none" to disable audio.
                         [string] [default: "copy"]
---format                Output container format (mkv, mp4, mov, anything that FFmpeg supports).
+--format                Output container format (mkv, mp4, mov, anything that FFmpeg supports)
                         [string] [default: "mkv"]
---skip                  Skip download if file already exists.  [boolean] [default: false]
+--skip                  Skip download if file already exists  [boolean] [default: false]
```
- both --videoUrls and --inputFile also accept Microsoft Teams Groups url so if your Organization placed the videos you are interested in a group you can copy the link and Destreamer will download all the videos it can inside it! A group url looks like this https://web.microsoftstream.com/group/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
- Passing `--username` is optional. It's there to make logging in faster (the username field will be populated automatically on the login form).
- You can use an absolute path for `-o` (output directory), for example `/mnt/videos`.
@@ -171,13 +140,13 @@ These optional lines must start with white space(s).
Usage -
```
https://web.microsoftstream.com/video/xxxxxxxx-aaaa-xxxx-xxxx-xxxxxxxxxxxx
-    -dir="videos/lessons/week1"
+    -dir=videos/lessons/week1
https://web.microsoftstream.com/video/xxxxxxxx-aaaa-xxxx-xxxx-xxxxxxxxxxxx
-    -dir="videos/lessons/week2"
+    -dir=videos/lessons/week2"
```
 ### Title template

-The `-t` option allows user to specify a custom filename for the videos.
+The `-t` option allows users to input a template string for the output file names.

 You can use one or more of the following magic sequence which will get substituted at runtime. The magic sequence must be surrounded by curly brackets like this: `{title} {publishDate}`
@@ -189,20 +158,8 @@ You can use one or more of the following magic sequence which will get substitut
 - `authorEmail`: E-mail of video publisher
 - `uniqueId`: An _unique-enough_ ID generated from the video metadata

-Examples -
+Example -
```
Input:
-t 'This is an example'
Expected filename:
This is an example.mkv
Input:
-t 'This is an example by {author}'
Expected filename:
This is an example by lukaarma.mkv
Input:
-t '{title} - {duration} - {publishDate} - {publishTime} - {author} - {authorEmail} - {uniqueId}'
@@ -220,15 +177,7 @@ iTerm2 on a Mac -
 ![screenshot](assets/screenshot-mac.png)

-By default, downloads are saved under project root `Destreamer/videos/` ( Not the system media Videos folder ), unless specified by `-o` (output directory).
+By default, downloads are saved under `videos/` unless specified by `-o` (output directory).
## KNOWN BUGS
If you get a
```
[FATAL ERROR] Unknown error: exit code 4
```
when running destreamer, then make sure you're running a recent (post year 2019), stable version of **ffmpeg**.
## Contributing
@@ -240,7 +189,6 @@ Please open an [issue](https://github.com/snobu/destreamer/issues) and we'll loo
 [ffmpeg]: https://www.ffmpeg.org/download.html
-[aria2]: https://github.com/aria2/aria2/releases
 [xming]: https://sourceforge.net/projects/xming/
 [node]: https://nodejs.org/en/download/
 [git]: https://git-scm.com/downloads
@@ -248,4 +196,3 @@ Please open an [issue](https://github.com/snobu/destreamer/issues) and we'll loo
[polimi]: https://www.polimi.it
[unipi]: https://www.unipi.it/
[unical]: https://www.unical.it/portale/
[unipr]: https://www.unipr.it/

package-lock.json (generated): 2727 changes

File diff suppressed because it is too large


@@ -1,51 +1,50 @@
 {
     "name": "destreamer",
     "repository": {
         "type": "git",
         "url": "git://github.com/snobu/destreamer.git"
     },
     "version": "2.1.0",
     "description": "Save Microsoft Stream videos for offline enjoyment.",
     "main": "build/src/destreamer.js",
     "bin": "build/src/destreamer.js",
     "scripts": {
-        "build": "echo Transpiling TypeScript to JavaScript... && tsc && echo Destreamer was built successfully.",
-        "watch": "tsc --watch",
+        "build": "echo Transpiling TypeScript to JavaScript... && node node_modules/typescript/bin/tsc && echo Destreamer was built successfully.",
         "test": "mocha build/test",
         "lint": "eslint src/*.ts"
     },
     "keywords": [],
     "author": "snobu",
     "license": "MIT",
     "devDependencies": {
-        "@types/mocha": "^8.0.4",
+        "@types/mocha": "^7.0.2",
-        "@types/puppeteer": "^5.4.0",
+        "@types/puppeteer": "^1.20.4",
         "@types/readline-sync": "^1.4.3",
-        "@types/tmp": "^0.2.0",
+        "@types/tmp": "^0.1.0",
-        "@types/yargs": "^15.0.11",
+        "@types/yargs": "^15.0.3",
-        "@typescript-eslint/eslint-plugin": "^4.9.0",
+        "@typescript-eslint/eslint-plugin": "^2.25.0",
-        "@typescript-eslint/parser": "^4.9.0",
+        "@typescript-eslint/parser": "^2.25.0",
-        "eslint": "^7.14.0",
+        "eslint": "^6.8.0",
-        "mocha": "^8.2.1",
+        "mocha": "^7.1.1",
-        "tmp": "^0.2.1"
+        "tmp": "^0.1.0"
     },
     "dependencies": {
-        "@tedconf/fessonia": "^2.1.2",
+        "@tedconf/fessonia": "^2.1.0",
-        "@types/cli-progress": "^3.8.0",
+        "@types/cli-progress": "^3.4.2",
         "@types/jwt-decode": "^2.2.1",
-        "axios": "^0.21.2",
+        "axios": "^0.19.2",
-        "axios-retry": "^3.1.9",
+        "axios-retry": "^3.1.8",
-        "cli-progress": "^3.8.2",
+        "cli-progress": "^3.7.0",
         "colors": "^1.4.0",
         "is-elevated": "^3.0.0",
-        "iso8601-duration": "^1.3.0",
+        "iso8601-duration": "^1.2.0",
-        "jwt-decode": "^3.1.2",
+        "jwt-decode": "^2.2.0",
-        "puppeteer": "5.5.0",
+        "puppeteer": "2.1.1",
         "readline-sync": "^1.4.10",
         "sanitize-filename": "^1.6.3",
-        "terminal-image": "^1.2.1",
+        "terminal-image": "^1.0.1",
-        "typescript": "^4.1.2",
+        "typescript": "^3.8.3",
-        "winston": "^3.3.3",
+        "winston": "^3.3.2",
-        "yargs": "^16.1.1"
+        "yargs": "^15.0.3"
     }
 }


@@ -1,18 +1,16 @@
 import { logger } from './Logger';
-import { ShareSession, StreamSession, Video } from './Types';
+import { Session } from './Types';
-import { publishedDateToString, publishedTimeToString } from './VideoUtils';
 import axios, { AxiosRequestConfig, AxiosResponse, AxiosInstance, AxiosError } from 'axios';
 import axiosRetry, { isNetworkOrIdempotentRequestError } from 'axios-retry';
-// import fs from 'fs';

-export class StreamApiClient {
-    private static instance: StreamApiClient;
+export class ApiClient {
+    private static instance: ApiClient;
     private axiosInstance?: AxiosInstance;
-    private session?: StreamSession;
+    private session?: Session;

-    private constructor(session?: StreamSession) {
+    private constructor(session?: Session) {
         this.session = session;
         this.axiosInstance = axios.create({
             baseURL: session?.ApiGatewayUri,
@@ -36,9 +34,6 @@ export class StreamApiClient {
                 return true;
             }
             logger.warn(`Got HTTP code ${err?.response?.status ?? undefined}. Retrying request...`);
-            logger.warn('Here is the error message: ');
-            console.dir(err.response?.data);
-            logger.warn('We called this URL: ' + err.response?.config.baseURL + err.response?.config.url);

             const shouldRetry: boolean = retryCodes.includes(err?.response?.status ?? 0);
@@ -47,27 +42,12 @@ export class StreamApiClient {
         });
     }

-    /**
-     * Used to initialize/retrive the active ApiClient
-     *
-     * @param session used if initializing
-     */
-    public static getInstance(session?: StreamSession): StreamApiClient {
-        if (!StreamApiClient.instance) {
-            StreamApiClient.instance = new StreamApiClient(session);
-        }
-        return StreamApiClient.instance;
-    }
-
-    public setSession(session: StreamSession): void {
-        if (!StreamApiClient.instance) {
-            logger.warn("Trying to update ApiCient session when it's not initialized!");
-        }
-        this.session = session;
-
-        return;
-    }
+    public static getInstance(session?: Session): ApiClient {
+        if (!ApiClient.instance) {
+            ApiClient.instance = new ApiClient(session);
+        }
+        return ApiClient.instance;
+    }
    /**
@@ -115,134 +95,3 @@ export class StreamApiClient {
        });
    }
}
export class ShareApiClient {
private axiosInstance: AxiosInstance;
private site: string;
public constructor(domain: string, site: string, session: ShareSession) {
this.axiosInstance = axios.create({
baseURL: domain,
// timeout: 7000,
headers: {
'User-Agent': 'destreamer/3.0 ALPHA',
'Cookie': `rtFa=${session.rtFa}; FedAuth=${session.FedAuth}`
}
});
this.site = site;
// FIXME: disabled because it was messing with the direct download check
// axiosRetry(this.axiosInstance, {
// // The following option is not working.
// // We should open an issue on the relative GitHub
// shouldResetTimeout: true,
// retries: 6,
// retryDelay: (retryCount: number) => {
// return retryCount * 2000;
// },
// retryCondition: (err: AxiosError) => {
// const retryCodes: Array<number> = [429, 500, 502, 503];
// if (isNetworkOrIdempotentRequestError(err)) {
// logger.warn(`${err}. Retrying request...`);
// return true;
// }
// logger.warn(`Got HTTP code ${err?.response?.status ?? undefined}.`);
// logger.warn('Here is the error message: ');
// console.dir(err.response?.data);
// logger.warn('We called this URL: ' + err.response?.config.baseURL + err.response?.config.url);
// const shouldRetry: boolean = retryCodes.includes(err?.response?.status ?? 0);
// return shouldRetry;
// }
// });
}
public async getVideoInfo(filePath: string, outPath: string): Promise<Video> {
let playbackUrl: string;
// TODO: Ripped this straigth from chromium inspector. Don't know don't care what it is right now. Check later
const payload = {
parameters: {
__metadata: {
type: 'SP.RenderListDataParameters'
},
ViewXml: `<View Scope="RecursiveAll"><Query><Where><Eq><FieldRef Name="FileRef" /><Value Type="Text"><![CDATA[${filePath}]]></Value></Eq></Where></Query><RowLimit Paged="TRUE">1</RowLimit></View>`,
RenderOptions: 12295,
AddRequiredFields: true
}
};
const url = `${this.site}/_api/web/GetListUsingPath(DecodedUrl=@a1)/RenderListDataAsStream?@a1='${encodeURIComponent(filePath)}'`;
logger.verbose(`Requesting video info for '${url}'`);
const info = await this.axiosInstance.post(url, payload, {
headers: {
'Content-Type': 'application/json;odata=verbose'
}
}).then(res => res.data);
// fs.writeFileSync('info.json', JSON.stringify(info, null, 4));
// FIXME: very bad but usefull in alpha stage to check for edge cases
if (info.ListData.Row.length !== 1) {
logger.error('More than 1 row in SharePoint video info', { fatal: true });
process.exit(1000);
}
const direct = await this.canDirectDownload(filePath);
const b64VideoMetadata = JSON.parse(
info.ListData.Row[0].MediaServiceFastMetadata
).video.altManifestMetadata;
const durationSeconds = Math.ceil(
(JSON.parse(
Buffer.from(b64VideoMetadata, 'base64').toString()
).Duration100Nano) / 10 / 1000 / 1000
);
if (direct) {
playbackUrl = this.axiosInstance.defaults.baseURL + filePath;
// logger.verbose(playbackUrl);
}
else {
playbackUrl = info.ListSchema['.videoManifestUrl'];
playbackUrl = playbackUrl.replace('{.mediaBaseUrl}', info.ListSchema['.mediaBaseUrl']);
// the only filetype works I found
playbackUrl = playbackUrl.replace('{.fileType}', 'mp4');
playbackUrl = playbackUrl.replace('{.callerStack}', info.ListSchema['.callerStack']);
playbackUrl = playbackUrl.replace('{.spItemUrl}', info.ListData.Row[0]['.spItemUrl']);
playbackUrl = playbackUrl.replace('{.driveAccessToken}', info.ListSchema['.driveAccessToken']);
playbackUrl += '&part=index&format=dash';
}
return {
direct,
title: filePath.split('/').pop() ?? 'video.mp4',
duration: publishedTimeToString(durationSeconds),
publishDate: publishedDateToString(info.ListData.Row[0]['Modified.']),
publishTime: publishedTimeToString(info.ListData.Row[0]['Modified.']),
author: info.ListData.Row[0]['Author.title'],
authorEmail: info.ListData.Row[0]['Author.email'],
uniqueId: info.ListData.Row[0].GUID.substring(1, 9),
outPath,
playbackUrl,
totalChunks: durationSeconds
};
}
private async canDirectDownload(filePath: string): Promise<boolean> {
logger.verbose(`Checking direct download for '${filePath}'`);
return this.axiosInstance.head(
filePath, { maxRedirects: 0 }
).then(
res => (res.status === 200)
).catch(
() => false
);
}
}


@@ -1,23 +1,24 @@
-import { CLI_ERROR } from './Errors';
+import { CLI_ERROR, ERROR_CODE } from './Errors';
-import { makeOutDir } from './Utils';
+import { checkOutDir } from './Utils';
 import { logger } from './Logger';
 import { templateElements } from './Types';

 import fs from 'fs';
+import readlineSync from 'readline-sync';
 import sanitize from 'sanitize-filename';
 import yargs from 'yargs';

-export const argv = yargs.options({
+export const argv: any = yargs.options({
     username: {
         alias: 'u',
         type: 'string',
-        describe: 'The username used to log into Microsoft Stream (enabling this will fill in the email field for you).',
+        describe: 'The username used to log into Microsoft Stream (enabling this will fill in the email field for you)',
         demandOption: false
     },
     videoUrls: {
         alias: 'i',
-        describe: 'List of urls to videos or Microsoft Stream groups.',
+        describe: 'List of video urls',
         type: 'array',
         demandOption: false
     },
@@ -29,7 +30,7 @@ export const argv = yargs.options({
     },
     outputDirectory: {
         alias: 'o',
-        describe: 'The directory where destreamer will save your downloads.',
+        describe: 'The directory where destreamer will save your downloads',
         type: 'string',
         default: 'videos',
         demandOption: false
@@ -43,43 +44,42 @@ export const argv = yargs.options({
     },
     keepLoginCookies: {
         alias: 'k',
-        describe: 'Let Chromium cache identity provider cookies so you can use "Remember me" during login.\n' +
-                  'Must be used every subsequent time you launch Destreamer if you want to log in automatically.',
+        describe: 'Let Chromium cache identity provider cookies so you can use "Remember me" during login',
         type: 'boolean',
         default: false,
         demandOption: false
     },
     noExperiments: {
         alias: 'x',
-        describe: 'Do not attempt to render video thumbnails in the console.',
+        describe: 'Do not attempt to render video thumbnails in the console',
         type: 'boolean',
         default: false,
         demandOption: false
     },
     simulate: {
         alias: 's',
-        describe: 'Disable video download and print metadata information to the console.',
+        describe: 'Disable video download and print metadata information to the console',
         type: 'boolean',
         default: false,
         demandOption: false
     },
     verbose: {
         alias: 'v',
-        describe: 'Print additional information to the console (use this before opening an issue on GitHub).',
+        describe: 'Print additional information to the console (use this before opening an issue on GitHub)',
         type: 'boolean',
         default: false,
         demandOption: false
     },
     closedCaptions: {
         alias: 'cc',
-        describe: 'Check if closed captions are available and let the user choose which one to download (will not ask if only one available).',
+        describe: 'Check if closed captions are aviable and let the user choose which one to download (will not ask if only one aviable)',
         type: 'boolean',
         default: false,
         demandOption: false
     },
     noCleanup: {
         alias: 'nc',
-        describe: 'Do not delete the downloaded video file when an FFmpeg error occurs.',
+        describe: 'Do not delete the downloaded video file when an FFmpeg error occurs',
         type: 'boolean',
         default: false,
         demandOption: false
@@ -97,13 +97,13 @@ export const argv = yargs.options({
         demandOption: false
     },
     format: {
-        describe: 'Output container format (mkv, mp4, mov, anything that FFmpeg supports).',
+        describe: 'Output container format (mkv, mp4, mov, anything that FFmpeg supports)',
         type: 'string',
         default: 'mkv',
         demandOption: false
     },
     skip: {
-        describe: 'Skip download if file already exists.',
+        describe: 'Skip download if file already exists',
         type: 'boolean',
         default: false,
         demandOption: false
@@ -113,7 +113,7 @@ export const argv = yargs.options({
     .check(() => noArguments())
     .check((argv: any) => checkInputConflicts(argv.videoUrls, argv.inputFile))
     .check((argv: any) => {
-        if (makeOutDir(argv.outputDirectory)) {
+        if (checkOutDir(argv.outputDirectory)) {
             return true;
         }
         else {
@@ -173,8 +173,9 @@ function checkInputConflicts(videoUrls: Array<string | number> | undefined,
 function isOutputTemplateValid(argv: any): boolean {
+    let finalTemplate: string = argv.outputTemplate;
     const elementRegEx = RegExp(/{(.*?)}/g);
-    let match = elementRegEx.exec(argv.outputTemplate);
+    let match = elementRegEx.exec(finalTemplate);

     // if no template elements this fails
     if (match) {
@@ -182,18 +183,34 @@ function isOutputTemplateValid(argv: any): boolean {
         while (match) {
             if (!templateElements.includes(match[1])) {
                 logger.error(
-                    `'${match[0]}' is not available as a template element \n` +
-                    `Available templates elements: '${templateElements.join("', '")}' \n`,
+                    `'${match[0]}' is not aviable as a template element \n` +
+                    `Aviable templates elements: '${templateElements.join("', '")}' \n`,
                     { fatal: true }
                 );
                 process.exit(1);
             }
-            match = elementRegEx.exec(argv.outputTemplate);
+            match = elementRegEx.exec(finalTemplate);
         }
     }
+    // bad template from user, switching to default
+    else {
+        logger.warn('Empty output template provided, using default one \n');
+        finalTemplate = '{title} - {publishDate} {uniqueId}';
+    }

-    argv.outputTemplate = sanitize(argv.outputTemplate.trim());
+    argv.outputTemplate = sanitize(finalTemplate.trim());

     return true;
 }

+export function promptUser(choices: Array<string>): number {
+    let index: number = readlineSync.keyInSelect(choices, 'Which resolution/format do you prefer?');
+    if (index === -1) {
+        process.exit(ERROR_CODE.CANCELLED_USER_INPUT);
+    }
+
+    return index;
+}


@@ -1,335 +0,0 @@
import { ShareApiClient, StreamApiClient } from './ApiClient';
import { argv } from './CommandLineParser';
import { ERROR_CODE } from './Errors';
import { logger } from './Logger';
import { doShareLogin, doStreamLogin } from './LoginModules';
import { drawThumbnail } from './Thumbnail';
import { refreshSession, TokenCache } from './TokenCache';
import { Video, VideoUrl } from './Types';
import { ffmpegTimemarkToChunk } from './Utils';
import { createUniquePath, getStreamInfo } from './VideoUtils';
import cliProgress from 'cli-progress';
import fs from 'fs';
import { execSync } from 'child_process';
import path from 'path';
const { FFmpegCommand, FFmpegInput, FFmpegOutput } = require('@tedconf/fessonia')();
const tokenCache: TokenCache = new TokenCache();
export async function downloadStreamVideo(videoUrls: Array<VideoUrl>): Promise<void> {
logger.info('Downloading Microsoft Stream videos...');
let session = tokenCache.Read() ?? await doStreamLogin('https://web.microsoftstream.com/', tokenCache, argv.username);
logger.verbose(
'Session and API info \n' +
'\t API Gateway URL: '.cyan + session.ApiGatewayUri + '\n' +
'\t API Gateway version: '.cyan + session.ApiGatewayVersion + '\n'
);
logger.info('Fetching videos info... \n');
const videos: Array<Video> = createUniquePath(
await getStreamInfo(videoUrls, session, argv.closedCaptions),
argv.outputTemplate, argv.format, argv.skip
);
if (argv.simulate) {
videos.forEach((video: Video) => {
logger.info(
'\nTitle: '.green + video.title +
'\nOutPath: '.green + video.outPath +
'\nPublished Date: '.green + video.publishDate +
'\nPlayback URL: '.green + video.playbackUrl +
((video.captionsUrl) ? ('\nCC URL: '.green + video.captionsUrl) : '')
);
});
return;
}
for (const [index, video] of videos.entries()) {
if (argv.skip && fs.existsSync(video.outPath)) {
logger.info(`File already exists, skipping: ${video.outPath} \n`);
continue;
}
if (argv.keepLoginCookies && index !== 0) {
logger.info('Trying to refresh token...');
session = await refreshSession('https://web.microsoftstream.com/video/' + video.guid);
StreamApiClient.getInstance().setSession(session);
}
const pbar: cliProgress.SingleBar = new cliProgress.SingleBar({
barCompleteChar: '\u2588',
barIncompleteChar: '\u2591',
format: 'progress [{bar}] {percentage}% {speed} {eta_formatted}',
// process.stdout.columns may return undefined in some terminals (Cygwin/MSYS)
barsize: Math.floor((process.stdout.columns || 30) / 3),
stopOnComplete: true,
hideCursor: true,
});
logger.info(`\nDownloading Video: ${video.title} \n`);
logger.verbose('Extra video info \n' +
'\t Video m3u8 playlist URL: '.cyan + video.playbackUrl + '\n' +
'\t Video thumbnail URL: '.cyan + video.posterImageUrl + '\n' +
'\t Video subtitle URL (may not exist): '.cyan + video.captionsUrl + '\n' +
'\t Video total chunks: '.cyan + video.totalChunks + '\n');
logger.info('Spawning ffmpeg with access token and HLS URL. This may take a few seconds...\n\n');
if (!process.stdout.columns) {
logger.warn(
'Unable to get number of columns from terminal.\n' +
'This happens sometimes in Cygwin/MSYS.\n' +
'No progress bar can be rendered, however the download process should not be affected.\n\n' +
'Please use PowerShell or cmd.exe to run destreamer on Windows.'
);
}
const headers: string = 'Authorization: Bearer ' + session.AccessToken;
if (!argv.noExperiments) {
if (video.posterImageUrl) {
await drawThumbnail(video.posterImageUrl, session);
}
}
const ffmpegInpt: any = new FFmpegInput(video.playbackUrl, new Map([
['headers', headers]
]));
const ffmpegOutput: any = new FFmpegOutput(video.outPath, new Map([
argv.acodec === 'none' ? ['an', null] : ['c:a', argv.acodec],
argv.vcodec === 'none' ? ['vn', null] : ['c:v', argv.vcodec],
['n', null]
]));
const ffmpegCmd: any = new FFmpegCommand();
const cleanupFn: () => void = () => {
pbar.stop();
if (argv.noCleanup) {
return;
}
try {
fs.unlinkSync(video.outPath);
}
catch (e) {
// Future handling of an error (maybe)
}
};
pbar.start(video.totalChunks, 0, {
speed: '0'
});
// prepare ffmpeg command line
ffmpegCmd.addInput(ffmpegInpt);
ffmpegCmd.addOutput(ffmpegOutput);
if (argv.closedCaptions && video.captionsUrl) {
const captionsInpt: any = new FFmpegInput(video.captionsUrl, new Map([
['headers', headers]
]));
ffmpegCmd.addInput(captionsInpt);
}
ffmpegCmd.on('update', async (data: any) => {
const currentChunks: number = ffmpegTimemarkToChunk(data.out_time);
pbar.update(currentChunks, {
speed: data.bitrate
});
// Graceful fallback in case we can't get columns (Cygwin/MSYS)
if (!process.stdout.columns) {
process.stdout.write(`--- Speed: ${data.bitrate}, Cursor: ${data.out_time}\r`);
}
});
process.on('SIGINT', cleanupFn);
// let the magic begin...
await new Promise((resolve: any) => {
ffmpegCmd.on('error', (error: any) => {
cleanupFn();
logger.error(`FFmpeg returned an error: ${error.message}`);
process.exit(ERROR_CODE.UNK_FFMPEG_ERROR);
});
ffmpegCmd.on('success', () => {
pbar.update(video.totalChunks); // set progress bar to 100%
logger.info(`\nDownload finished: ${video.outPath} \n`);
resolve();
});
ffmpegCmd.spawn();
});
process.removeListener('SIGINT', cleanupFn);
}
}
// TODO: complete overhaul of this function
export async function downloadShareVideo(videoUrls: Array<VideoUrl>): Promise<void> {
const shareUrlRegex = new RegExp(/(?<domain>https:\/\/.+\.sharepoint\.com).*?(?<baseSite>\/(?:teams|sites|personal)\/.*?)(?:(?<filename>\/.*\.mp4)|\/.*id=(?<paramFilename>.*mp4))/);
logger.info('Downloading SharePoint videos...\n\n');
// FIXME: this may change we need a smart login system if a request fails
const session = await doShareLogin(videoUrls[0].url, argv.username);
for (const videoUrl of videoUrls) {
const match = shareUrlRegex.exec(videoUrl.url);
if (!match) {
logger.error(`Invalid url '${videoUrl.url}', skipping...`);
continue;
}
const shareDomain = match.groups!.domain;
const shareSite = match.groups!.baseSite;
const shareFilepath = decodeURIComponent(match.groups?.filename ? (shareSite + match.groups.filename) : match.groups!.paramFilename);
// FIXME: hardcoded video.mp4
const title = shareFilepath.split('/').pop()?.split('.')[0] ?? 'video';
const apiClient = new ShareApiClient(shareDomain, shareSite, session);
const video = await apiClient.getVideoInfo(shareFilepath, videoUrl.outDir);
createUniquePath(video, title, argv.format, argv.skip);
if (argv.simulate) {
if (argv.verbose) {
console.dir(video);
}
else {
logger.info(
'\nTitle: '.green + video.title +
'\nOutPath: '.green + video.outPath +
'\nPublished Date: '.green + video.publishDate +
'\nPlayback URL: '.green + video.playbackUrl
);
}
continue;
}
if (video.direct) {
const headers = `Cookie: rtFa=${session.rtFa}; FedAuth=${session.FedAuth}`;
// FIXME: unstable and bad all-around
try {
execSync(
'aria2c --max-connection-per-server 8 --console-log-level warn ' +
`--header "${headers}" --dir "${path.dirname(video.outPath)}" --out "${path.basename(video.outPath)}" "${shareDomain + shareFilepath}"`,
{ stdio: 'inherit' }
);
}
catch (error: any) {
logger.error(`${error.status} \n\n${error.message} \n\n${error.stdout.toString()} \n\n${error.stderr.toString()}`);
}
}
else {
// FIXME: just a copy-paste, should move to separate function
const pbar: cliProgress.SingleBar = new cliProgress.SingleBar({
barCompleteChar: '\u2588',
barIncompleteChar: '\u2591',
format: 'progress [{bar}] {percentage}% {speed} {eta_formatted}',
// process.stdout.columns may return undefined in some terminals (Cygwin/MSYS)
barsize: Math.floor((process.stdout.columns || 30) / 3),
stopOnComplete: true,
hideCursor: true,
});
logger.info(`\nDownloading Video: ${video.title} \n`);
logger.verbose('Extra video info \n' +
'\t Video m3u8 playlist URL: '.cyan + video.playbackUrl + '\n' +
'\t Video thumbnail URL: '.cyan + video.posterImageUrl + '\n' +
'\t Video subtitle URL (may not exist): '.cyan + video.captionsUrl + '\n' +
'\t Video total chunks: '.cyan + video.totalChunks + '\n');
logger.info('Spawning ffmpeg with access token and HLS URL. This may take a few seconds...\n\n');
if (!process.stdout.columns) {
logger.warn(
'Unable to get number of columns from terminal.\n' +
'This happens sometimes in Cygwin/MSYS.\n' +
'No progress bar can be rendered, however the download process should not be affected.\n\n' +
'Please use PowerShell or cmd.exe to run destreamer on Windows.'
);
}
const ffmpegInpt: any = new FFmpegInput(video.playbackUrl);
const ffmpegOutput: any = new FFmpegOutput(video.outPath, new Map([
argv.acodec === 'none' ? ['an', null] : ['c:a', argv.acodec],
argv.vcodec === 'none' ? ['vn', null] : ['c:v', argv.vcodec],
['n', null]
]));
const ffmpegCmd: any = new FFmpegCommand();
const cleanupFn: () => void = () => {
pbar.stop();
if (argv.noCleanup) {
return;
}
try {
fs.unlinkSync(video.outPath);
}
catch (e) {
// Future handling of an error (maybe)
}
};
pbar.start(video.totalChunks, 0, {
speed: '0'
});
// prepare ffmpeg command line
ffmpegCmd.addInput(ffmpegInpt);
ffmpegCmd.addOutput(ffmpegOutput);
ffmpegCmd.on('update', async (data: any) => {
const currentChunks: number = ffmpegTimemarkToChunk(data.out_time);
pbar.update(currentChunks, {
speed: data.bitrate
});
// Graceful fallback in case we can't get columns (Cygwin/MSYS)
if (!process.stdout.columns) {
process.stdout.write(`--- Speed: ${data.bitrate}, Cursor: ${data.out_time}\r`);
}
});
process.on('SIGINT', cleanupFn);
// let the magic begin...
await new Promise((resolve: any) => {
ffmpegCmd.on('error', (error: any) => {
cleanupFn();
logger.error(`FFmpeg returned an error: ${error.message}`);
process.exit(ERROR_CODE.UNK_FFMPEG_ERROR);
});
ffmpegCmd.on('success', () => {
pbar.update(video.totalChunks); // set progress bar to 100%
logger.info(`\nDownload finished: ${video.outPath} \n`);
resolve();
});
ffmpegCmd.spawn();
});
process.removeListener('SIGINT', cleanupFn);
// logger.error('TODO: manifest download');
// continue;
}
}
}

View File

@@ -1,55 +1,47 @@
export const enum ERROR_CODE {
    UNHANDLED_ERROR = 200,
    ELEVATED_SHELL,
    CANCELLED_USER_INPUT,
    MISSING_FFMPEG,
    MISSING_ARIA2,
    OUTDATED_FFMPEG,
    UNK_FFMPEG_ERROR,
    INVALID_VIDEO_GUID,
    NO_SESSION_INFO
}

export const errors: { [key: number]: string } = {
    [ERROR_CODE.UNHANDLED_ERROR]: 'Unhandled error!\n' +
        'Timeout or fatal error, please check your downloads directory and try again',

    [ERROR_CODE.ELEVATED_SHELL]: 'Destreamer cannot run in an elevated (Administrator/root) shell.\n' +
        'Please run in a regular, non-elevated window.',

    [ERROR_CODE.CANCELLED_USER_INPUT]: 'Input was cancelled by user',

    [ERROR_CODE.MISSING_FFMPEG]: 'FFmpeg is missing!\n' +
        'Destreamer requires a fairly recent release of FFmpeg to download videos',

    [ERROR_CODE.MISSING_ARIA2]: 'Aria2 is missing!\n' +
        'Destreamer requires a fairly recent release of Aria2 to download videos',

    [ERROR_CODE.OUTDATED_FFMPEG]: 'The FFmpeg version currently installed is too old!\n' +
        'Destreamer requires a fairly recent release of FFmpeg to download videos',

    [ERROR_CODE.UNK_FFMPEG_ERROR]: 'Unknown FFmpeg error',

    [ERROR_CODE.INVALID_VIDEO_GUID]: 'Unable to get video GUID from URL',

    [ERROR_CODE.NO_SESSION_INFO]: 'Could not evaluate sessionInfo on the page'
};

export const enum CLI_ERROR {
    MISSING_INPUT_ARG = 'You must specify a URLs source. \n' +
        'Valid options are -i for one or more URLs separated by space or -f for input file. \n',

    INPUT_ARG_CONFLICT = 'Too many URLs sources specified! \n' +
        'Please specify a single source, either -i or -f \n',

    INPUTFILE_WRONG_EXTENSION = 'The specified inputFile has the wrong extension \n' +
        'Please make sure to use path/to/filename.txt when using the -f option \n',

    INPUTFILE_NOT_FOUND = 'The specified inputFile does not exist \n' +
        'Please check the filename and the path you provided \n',

    INVALID_OUTDIR = 'Could not create the default/specified output directory \n' +
        'Please check directory and permissions and try again. \n'
}

View File

@@ -1,176 +0,0 @@
import { logger } from './Logger';
import puppeteer from 'puppeteer';
import { getPuppeteerChromiumPath } from './PuppeteerHelper';
import { chromeCacheFolder } from './destreamer';
import { argv } from './CommandLineParser';
import { ShareSession, StreamSession } from './Types';
import { ERROR_CODE } from './Errors';
import { TokenCache } from './TokenCache';
export async function doStreamLogin(url: string, tokenCache: TokenCache, username?: string): Promise<StreamSession> {
logger.info('Launching headless Chrome to perform the OpenID Connect dance...');
const browser: puppeteer.Browser = await puppeteer.launch({
executablePath: getPuppeteerChromiumPath(),
headless: false,
userDataDir: (argv.keepLoginCookies) ? chromeCacheFolder : undefined,
defaultViewport: null,
args: [
'--disable-dev-shm-usage',
'--fast-start',
'--no-sandbox'
]
});
// try-finally because we were leaving zombie processes if there was an error
try {
const page: puppeteer.Page = (await browser.pages())[0];
logger.info('Navigating to login page...');
await page.goto(url, { waitUntil: 'load' });
try {
if (username) {
await page.waitForSelector('input[type="email"]', { timeout: 3000 });
await page.keyboard.type(username);
await page.click('input[type="submit"]');
}
else {
/* If a username was not provided we let the user take actions that
lead up to the video page. */
}
}
catch (e) {
/* If there is no email input selector we aren't in the login module,
we are probably using the cache to aid the login.
It could finish the login on its own if the user said 'yes' when asked to
remember the credentials or it could still prompt the user for a password */
}
await browser.waitForTarget((target: puppeteer.Target) => target.url().endsWith('microsoftstream.com/'), { timeout: 150000 });
logger.info('We are logged in.');
let session: StreamSession | null = null;
let tries = 1;
while (!session) {
try {
let sessionInfo: any;
session = await page.evaluate(
() => {
return {
AccessToken: sessionInfo.AccessToken,
ApiGatewayUri: sessionInfo.ApiGatewayUri,
ApiGatewayVersion: sessionInfo.ApiGatewayVersion
};
}
);
}
catch (error) {
if (tries > 5) {
process.exit(ERROR_CODE.NO_SESSION_INFO);
}
session = null;
tries++;
await page.waitForTimeout(3000);
}
}
tokenCache.Write(session);
logger.info('Wrote access token to token cache.');
logger.info("At this point Chromium's job is done, shutting it down...\n");
return session;
}
finally {
await browser.close();
}
}
export async function doShareLogin(url: string, username?: string): Promise<ShareSession> {
logger.info('Launching headless Chrome to perform the OpenID Connect dance...');
let session: ShareSession | null = null;
const hostname = new URL(url).host;
const browser: puppeteer.Browser = await puppeteer.launch({
executablePath: getPuppeteerChromiumPath(),
headless: false,
devtools: argv.verbose,
userDataDir: (argv.keepLoginCookies) ? chromeCacheFolder : undefined,
defaultViewport: null,
args: [
'--disable-dev-shm-usage',
'--fast-start',
'--no-sandbox'
]
});
// try-finally because we were leaving zombie processes if there was an error
try {
const page: puppeteer.Page = (await browser.pages())[0];
logger.info('Navigating to login page...');
await page.goto(url, { waitUntil: 'load' });
try {
if (username) {
await page.waitForSelector('input[type="email"]', { timeout: 3000 });
await page.keyboard.type(username);
await page.click('input[type="submit"]');
}
else {
/* If a username was not provided we let the user take actions that
lead up to the video page. */
}
}
catch (e) {
/* If there is no email input selector we aren't in the login module,
we are probably using the cache to aid the login.
It could finish the login on its own if the user said 'yes' when asked to
remember the credentials or it could still prompt the user for a password */
}
logger.info('Waiting for target!');
await browser.waitForTarget((target: puppeteer.Target) => target.url().startsWith(`https://${hostname}`), { timeout: 150000 });
logger.info('We are logged in.');
let tries = 1;
while (!session) {
const cookieJar = (await page.cookies()).filter(
biscuit => biscuit.name == 'rtFa' || biscuit.name == 'FedAuth'
);
if (cookieJar.length != 2) {
if (tries > 5) {
process.exit(ERROR_CODE.NO_SESSION_INFO);
}
await page.waitForTimeout(1000 * tries++);
continue;
}
session = {
rtFa: cookieJar.find(biscuit => biscuit.name == 'rtFa')!.value,
FedAuth: cookieJar.find(biscuit => biscuit.name == 'FedAuth')!.value
};
}
logger.info("At this point Chromium's job is done, shutting it down...\n");
// await page.waitForTimeout(1000 * 60 * 60 * 60);
}
finally {
logger.verbose('Stream login browser closing...');
await browser.close();
logger.verbose('Stream login browser closed');
}
return session;
}

View File

@@ -1,14 +1,14 @@
import { StreamApiClient } from './ApiClient';
import { StreamSession } from './Types';

import terminalImage from 'terminal-image';
import { AxiosResponse } from 'axios';


export async function drawThumbnail(posterImage: string, session: StreamSession): Promise<void> {
    const apiClient: StreamApiClient = StreamApiClient.getInstance(session);

    const thumbnail: Buffer = await apiClient.callUrl(posterImage, 'get', null, 'arraybuffer')
        .then((response: AxiosResponse<any> | undefined) => response?.data);

    console.log(await terminalImage.buffer(thumbnail, { width: 70 } ));

View File

@@ -2,7 +2,7 @@ import { chromeCacheFolder } from './destreamer';
import { ERROR_CODE } from './Errors';
import { logger } from './Logger';
import { getPuppeteerChromiumPath } from './PuppeteerHelper';
import { StreamSession } from './Types';

import fs from 'fs';
import jwtDecode from 'jwt-decode';
@@ -12,23 +12,23 @@ import puppeteer from 'puppeteer';

export class TokenCache {
    private tokenCacheFile = '.token_cache';

    public Read(): StreamSession | null {
        if (!fs.existsSync(this.tokenCacheFile)) {
            logger.warn(`${this.tokenCacheFile} not found. \n`);

            return null;
        }

        const session: StreamSession = JSON.parse(fs.readFileSync(this.tokenCacheFile, 'utf8'));

        type Jwt = {
            [key: string]: any
        }

        const decodedJwt: Jwt = jwtDecode(session.AccessToken);
        const now: number = Math.floor(Date.now() / 1000);
        const exp: number = decodedJwt['exp'];
        const timeLeft: number = exp - now;

        if (timeLeft < 120) {
            logger.warn('Access token has expired! \n');
@@ -41,20 +41,19 @@ export class TokenCache {
        return session;
    }
    public Write(session: StreamSession): void {
        const s: string = JSON.stringify(session, null, 4);
        fs.writeFile(this.tokenCacheFile, s, (err: any) => {
            if (err) {
                return logger.error(err);
            }

            logger.info(`Fresh access token dropped into ${this.tokenCacheFile} \n`.green);
        });
    }
}


export async function refreshSession(url: string): Promise<StreamSession> {
    const videoId: string = url.split('/').pop() ?? process.exit(ERROR_CODE.INVALID_VIDEO_GUID);

    const browser: puppeteer.Browser = await puppeteer.launch({
@@ -73,7 +72,7 @@ export async function refreshSession(url: string): Promise<StreamSession> {
    await browser.waitForTarget((target: puppeteer.Target) => target.url().includes(videoId), { timeout: 30000 });

    let session: StreamSession | null = null;
    let tries = 1;

    while (!session) {
View File

@@ -1,34 +1,11 @@
export type StreamSession = {
    AccessToken: string;
    ApiGatewayUri: string;
    ApiGatewayVersion: string;
}

export type ShareSession = {
    FedAuth: string;
    rtFa: string;
}

export type VideoUrl = {
    url: string,
    outDir: string
}

export type SharepointVideo = {
    // if we can download the MP4 or we need to use DASH
    direct: boolean;
    playbackUrl: string;
    title: string;
    outPath: string
}

export type Video = {
    guid?: string;
    direct?: boolean;
    title: string;
    duration: string;
    publishDate: string;
@@ -39,7 +16,7 @@ export type Video = {
    outPath: string;
    totalChunks: number; // Abstraction of FFmpeg timemark
    playbackUrl: string;
    posterImageUrl?: string;
    captionsUrl?: string
}

View File

@@ -1,62 +1,35 @@
import { StreamApiClient } from './ApiClient';
import { ERROR_CODE } from './Errors';
import { logger } from './Logger';
import { StreamSession, VideoUrl } from './Types';

import { AxiosResponse } from 'axios';
import { execSync } from 'child_process';
import fs from 'fs';
import readlineSync from 'readline-sync';


const streamUrlRegex = new RegExp(/https?:\/\/web\.microsoftstream\.com.*/);
const shareUrlRegex = new RegExp(/https?:\/\/.+\.sharepoint\.com.*/);


/** we place the guid in the url field of the return */
export async function extractStreamGuids(urlList: Array<VideoUrl>, session: StreamSession): Promise<Array<VideoUrl>> {
    const videoRegex = new RegExp(/https:\/\/.*\/video\/(\w{8}-(?:\w{4}-){3}\w{12})/);
    const groupRegex = new RegExp(/https:\/\/.*\/group\/(\w{8}-(?:\w{4}-){3}\w{12})/);

    const apiClient: StreamApiClient = StreamApiClient.getInstance(session);
    const guidList: Array<VideoUrl> = [];

    for (const url of urlList) {
        const videoMatch: RegExpExecArray | null = videoRegex.exec(url.url);
        const groupMatch: RegExpExecArray | null = groupRegex.exec(url.url);

        if (videoMatch) {
            guidList.push({
                url: videoMatch[1],
                outDir: url.outDir
            });
        }
        else if (groupMatch) {
            const videoNumber: number = await apiClient.callApi(`groups/${groupMatch[1]}`, 'get')
                .then((response: AxiosResponse<any> | undefined) => response?.data.metrics.videos);

            // Anything above $top=100 results in 400 Bad Request
            // Use $skip to skip the first 100 and get another 100 and so on
            for (let index = 0; index <= Math.floor(videoNumber / 100); index++) {
                await apiClient.callApi(
                    `groups/${groupMatch[1]}/videos?$skip=${100 * index}&` +
                    '$top=100&$orderby=publishedDate asc', 'get'
                ).then((response: AxiosResponse<any> | undefined) => {
                    response?.data.value.forEach((video: { id: string }) =>
                        guidList.push({
                            url: video.id,
                            outDir: url.outDir
                        })
                    );
                });
            }
        }
        else {
            logger.warn(`Invalid url '${url.url}', skipping...`);
        }
    }

    return guidList;
}
@@ -67,32 +40,30 @@ export async function extractStreamGuids(urlList: Array<VideoUrl>, session: Stre
 *
 * @param {Array<string>} urlList list of links to parse
 * @param {string} defaultOutDir the directory used to save the videos
 *
 * @returns Array of 2 elements: 1st an array of Microsoft Stream urls, 2nd an array of SharePoint urls
 */
export function parseCLIinput(urlList: Array<string>, defaultOutDir: string): Array<Array<VideoUrl>> {
    const stream: Array<VideoUrl> = [];
    const share: Array<VideoUrl> = [];

    for (const url of urlList) {
        if (streamUrlRegex.test(url)) {
            stream.push({
                url: url,
                outDir: defaultOutDir
            });
        }
        else if (shareUrlRegex.test(url)) {
            share.push({
                url: url,
                outDir: defaultOutDir
            });
        }
        else {
            logger.warn(`Invalid url '${url}', skipping..`);
        }
    }

    return [stream, share];
}
@@ -103,91 +74,101 @@ export function parseCLIinput(urlList: Array<string>, defaultOutDir: string): Ar
 *
 * @param {string} inputFile path to the text file
 * @param {string} defaultOutDir the default/fallback directory used to save the videos
 *
 * @returns Array of 2 elements: 1st an array of Microsoft Stream urls, 2nd an array of SharePoint urls
 */
export function parseInputFile(inputFile: string, defaultOutDir: string): Array<Array<VideoUrl>> {
    // rawContent is a list of each line of the file
    const rawContent: Array<string> = fs.readFileSync(inputFile).toString().split(/\r?\n/);
    const stream: Array<VideoUrl> = [];
    const share: Array<VideoUrl> = [];
    let streamUrl = false;

    for (let i = 0; i < rawContent.length; i++) {
        const line: string = rawContent[i];
        const nextLine: string | null = i < rawContent.length ? rawContent[i + 1] : null;
        let outDir = defaultOutDir;

        // filter out lines with no content
        if (!line.match(/\S/)) {
            logger.warn(`Line ${i + 1} is empty, skipping..`);
            continue;
        }
        // check for urls
        else if (streamUrlRegex.test(line)) {
            streamUrl = true;
        }
        else if (shareUrlRegex.test(line)) {
            streamUrl = false;
        }
        // now invalid line since we skip ahead one line if we find dir option
        else {
            logger.warn(`Line ${i + 1}: '${line}' is invalid, skipping..`);
            continue;
        }

        // we now have a valid url, check next line for option
        if (nextLine) {
            const optionDir = parseOption('-dir', nextLine);
            if (optionDir && makeOutDir(optionDir)) {
                outDir = optionDir;
                // if there was an option we skip a line
                i++;
            }
        }

        if (streamUrl) {
            stream.push({
                url: line,
                outDir
            });
        }
        else {
            share.push({
                url: line,
                outDir
            });
        }
    }

    return [stream, share];
}
 // This leaves us the option to add more options (badum tss) _Luca
 function parseOption(optionSyntax: string, item: string): string | null {
     const match: RegExpMatchArray | null = item.match(
-        RegExp(`^\\s+${optionSyntax}\\s*=\\s*['"](.*)['"]`)
+        RegExp(`^\\s*${optionSyntax}\\s?=\\s?['"](.*)['"]`)
     );
     return match ? match[1] : null;
 }
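The option syntax `parseOption` accepts is a line like `-dir = "some/path"` (single or double quotes), with the quoted value returned via the capture group. A minimal sketch using the branch-side regex, runnable on its own:

```typescript
// Branch-side regex: optional leading whitespace, the option name,
// at most one space around '=', then a quoted value that gets captured.
function parseOption(optionSyntax: string, item: string): string | null {
    const match: RegExpMatchArray | null = item.match(
        RegExp(`^\\s*${optionSyntax}\\s?=\\s?['"](.*)['"]`)
    );
    return match ? match[1] : null;
}

console.log(parseOption('-dir', ' -dir = "videos/lectures"')); // videos/lectures
console.log(parseOption('-dir', 'https://example.com/video')); // null (not an option line)
```

Note that `\s?` permits at most one space on each side of `=`; two or more spaces would make the line fail to parse under this version of the regex.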
-/**
- * @param directory path to create
- * @returns true on success, false otherwise
- */
-export function makeOutDir(directory: string): boolean {
+export function checkOutDir(directory: string): boolean {
     if (!fs.existsSync(directory)) {
         try {
             fs.mkdirSync(directory);
             logger.info('\nCreated directory: '.yellow + directory);
         }
         catch (e) {
-            logger.warn('Cannot create directory: ' + directory +
+            logger.warn('Cannot create directory: '+ directory +
                 '\nFalling back to default directory..');
             return false;
@@ -200,52 +181,20 @@ export function makeOutDir(directory: string): boolean {
 export function checkRequirements(): void {
     try {
-        const copyrightYearRe = new RegExp(/\d{4}-(\d{4})/);
         const ffmpegVer: string = execSync('ffmpeg -version').toString().split('\n')[0];
-        if (parseInt(copyrightYearRe.exec(ffmpegVer)?.[1] ?? '0') <= 2019) {
-            process.exit(ERROR_CODE.OUTDATED_FFMPEG);
-        }
         logger.verbose(`Using ${ffmpegVer}\n`);
     }
     catch (e) {
         process.exit(ERROR_CODE.MISSING_FFMPEG);
     }
-
-    try {
-        const versionRegex = new RegExp(/aria2 version (.*)/);
-        const aira2Ver: string = execSync('aria2c --version').toString().split('\n')[0];
-        if (versionRegex.test(aira2Ver)) {
-            logger.verbose(`Using ${aira2Ver}\n`);
-        }
-        else {
-            throw new Error();
-        }
-    }
-    catch (e) {
-        process.exit(ERROR_CODE.MISSING_ARIA2);
-    }
 }
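The master-side freshness check pulls the copyright year range out of the first line of `ffmpeg -version` and rejects builds whose latest year is 2019 or earlier. A standalone sketch with a hard-coded banner (the banner text is an assumed, typical first line, not taken from the diff):

```typescript
// Extract the second year from a "Copyright (c) 2000-2021" range and
// apply the master-side cutoff: builds no newer than 2019 are rejected.
const banner = 'ffmpeg version 4.4 Copyright (c) 2000-2021 the FFmpeg developers';
const copyrightYearRe = /\d{4}-(\d{4})/;
const latestYear = parseInt(copyrightYearRe.exec(banner)?.[1] ?? '0');

console.log(latestYear); // 2021
console.log(latestYear > 2019); // true → this build passes the check
```

The `?? '0'` fallback means a banner with no recognizable year range parses as 0 and is treated as outdated, which matches the defensive intent of the original check.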
-// number of seconds
 export function ffmpegTimemarkToChunk(timemark: string): number {
     const timeVals: Array<string> = timemark.split(':');
     const hrs: number = parseInt(timeVals[0]);
     const mins: number = parseInt(timeVals[1]);
     const secs: number = parseInt(timeVals[2]);
-    return (hrs * 60 * 60) + (mins * 60) + secs;
+    return (hrs * 60) + mins + (secs / 60);
 }
-
-export function promptUser(choices: Array<string>): number {
-    const index: number = readlineSync.keyInSelect(choices, 'Which resolution/format do you prefer?');
-    if (index === -1) {
-        process.exit(ERROR_CODE.CANCELLED_USER_INPUT);
-    }
-    return index;
-}
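The two sides disagree on units here: the master-side version collapses an FFmpeg `HH:MM:SS` timemark to total seconds (which is what the progress bar counts as "chunks"), while the branch returns minutes. A standalone sketch of the master-side formula:

```typescript
// Master-side conversion: FFmpeg reports progress as "HH:MM:SS(.xx)";
// the progress bar counts whole seconds, so the timemark collapses to seconds.
function ffmpegTimemarkToChunk(timemark: string): number {
    const [hrs, mins, secs] = timemark.split(':').map((v) => parseInt(v));
    return (hrs * 60 * 60) + (mins * 60) + secs;
}

console.log(ffmpegTimemarkToChunk('01:02:30')); // 3750
console.log(ffmpegTimemarkToChunk('00:00:07.84')); // 7 (parseInt drops the fraction)
```

Keeping this in the same unit as `durationToTotalChunks` is what lets `pbar.update(currentChunks)` track the download against the video's total length.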
@@ -1,16 +1,15 @@
-import { StreamApiClient } from './ApiClient';
-import { promptUser } from './Utils';
+import { ApiClient } from './ApiClient';
+import { promptUser } from './CommandLineParser';
 import { logger } from './Logger';
-import { Video, StreamSession, VideoUrl } from './Types';
+import { Video, Session } from './Types';
 import { AxiosResponse } from 'axios';
 import fs from 'fs';
 import { parse as parseDuration, Duration } from 'iso8601-duration';
 import path from 'path';
 import sanitizeWindowsName from 'sanitize-filename';
-import { extractStreamGuids } from './Utils';

-export function publishedDateToString(date: string): string {
+function publishedDateToString(date: string): string {
     const dateJs: Date = new Date(date);
     const day: string = dateJs.getDate().toString().padStart(2, '0');
     const month: string = (dateJs.getMonth() + 1).toString(10).padStart(2, '0');
@@ -18,45 +17,36 @@ export function publishedDateToString(date: string): string {
     return `${dateJs.getFullYear()}-${month}-${day}`;
 }
-export function publishedTimeToString(seconds: number): string
-export function publishedTimeToString(date: string): string
-export function publishedTimeToString(date: string | number): string {
-    let dateJs: Date;
-    if (typeof (date) === 'number') {
-        dateJs = new Date(0, 0, 0, 0, 0, date);
-    }
-    else {
-        dateJs = new Date(date);
-    }
+function publishedTimeToString(date: string): string {
+    const dateJs: Date = new Date(date);
     const hours: string = dateJs.getHours().toString();
     const minutes: string = dateJs.getMinutes().toString();
     const seconds: string = dateJs.getSeconds().toString();
-    return `${hours}h ${minutes}m ${seconds}s`;
+    return `${hours}:${minutes}:${seconds}`;
 }

-export function isoDurationToString(time: string): string {
+function isoDurationToString(time: string): string {
     const duration: Duration = parseDuration(time);
-    return `${duration.hours ?? '00'}.${duration.minutes ?? '00'}.${duration.seconds?.toFixed(0) ?? '00'}`;
+    return `${duration.hours ?? '00'}:${duration.minutes ?? '00'}:${duration.seconds?.toFixed(0) ?? '00'}`;
 }

-// it's the number of seconds in the video
-export function durationToTotalChunks(duration: string,): number {
+function durationToTotalChunks(duration: string): number {
     const durationObj: any = parseDuration(duration);
     const hrs: number = durationObj.hours ?? 0;
     const mins: number = durationObj.minutes ?? 0;
     const secs: number = Math.ceil(durationObj.seconds ?? 0);
-    return (hrs * 60 * 60) + (mins * 60) + secs;
+    return (hrs * 60) + mins + (secs / 60);
 }
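Both versions of `durationToTotalChunks` lean on the `iso8601-duration` package to parse the `PT#H#M#S` strings Stream returns. A dependency-free stand-in is enough to show the master-side arithmetic (the regex-based parser below is ours and only covers the hour/minute/second fields, not the full ISO 8601 grammar):

```typescript
// Minimal stand-in for the iso8601-duration dependency, enough for the
// "PT#H#M#S" strings seen here; the real library handles many more fields.
function parseDurationToSeconds(iso: string): number {
    const m = iso.match(/^PT(?:(\d+)H)?(?:(\d+)M)?(?:([\d.]+)S)?$/);
    if (!m) throw new Error(`Unsupported duration: ${iso}`);
    const hrs = parseInt(m[1] ?? '0');
    const mins = parseInt(m[2] ?? '0');
    const secs = Math.ceil(parseFloat(m[3] ?? '0'));
    return (hrs * 60 * 60) + (mins * 60) + secs;
}

console.log(parseDurationToSeconds('PT1H2M3.5S')); // 3600 + 120 + 4 = 3724
console.log(parseDurationToSeconds('PT45M')); // 2700
```

Rounding seconds up with `Math.ceil` mirrors the original: the chunk total must never undershoot the stream length, or the progress bar would finish before ffmpeg does.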
-export async function getStreamInfo(videoUrls: Array<VideoUrl>, session: StreamSession, subtitles?: boolean): Promise<Array<Video>> {
-    const metadata: Array<Video> = [];
+export async function getVideoInfo(videoGuid: string, session: Session, subtitles?: boolean): Promise<Video> {
+    // template elements
     let title: string;
     let duration: string;
     let publishDate: string;
@@ -64,130 +54,101 @@ export async function getStreamInfo(videoUrls: Array<VideoUrl>, session: StreamS
     let author: string;
     let authorEmail: string;
     let uniqueId: string;
-    // ffmpeg magic (abstraction of FFmpeg timemark)
-    let totalChunks: number;
-    // various sources
+    // final video path (here for consistency with typedef)
+    const outPath = '';
+    let totalChunks: number;    // Abstraction of FFmpeg timemark
     let playbackUrl: string;
     let posterImageUrl: string;
     let captionsUrl: string | undefined;

-    const apiClient: StreamApiClient = StreamApiClient.getInstance(session);
-
-    // we place the guid in the url field
-    const videoGUIDs = await extractStreamGuids(videoUrls, session);
-
-    /* TODO: change this to a single guid at a time to ease our footprint on the
-    MSS servers or we get throttled after 10 sequential reqs */
-    for (const guid of videoGUIDs) {
-        const response: AxiosResponse<any> | undefined =
-            await apiClient.callApi('videos/' + guid.url + '?$expand=creator', 'get');
-
-        title = sanitizeWindowsName(response?.data['name']);
-        duration = isoDurationToString(response?.data.media['duration']);
-        publishDate = publishedDateToString(response?.data['publishedDate']);
-        publishTime = publishedTimeToString(response?.data['publishedDate']);
-        author = response?.data['creator'].name;
-        authorEmail = response?.data['creator'].mail;
-        uniqueId = '#' + guid.url.split('-')[0];
-        totalChunks = durationToTotalChunks(response?.data.media['duration']);
-
-        playbackUrl = response?.data['playbackUrls']
-            .filter((item: { [x: string]: string; }) =>
-                item['mimeType'] == 'application/vnd.apple.mpegurl')
-            .map((item: { [x: string]: string }) => {
-                return item['playbackUrl'];
-            })[0];
-
-        posterImageUrl = response?.data['posterImage']['medium']['url'];
-
-        if (subtitles) {
-            const captions: AxiosResponse<any> | undefined = await apiClient.callApi(`videos/${guid.url}/texttracks`, 'get');
-
-            if (!captions?.data.value.length) {
-                captionsUrl = undefined;
-            }
-            else if (captions?.data.value.length === 1) {
-                logger.info(`Found subtitles for ${title}. \n`);
-                captionsUrl = captions?.data.value.pop().url;
-            }
-            else {
-                const index: number = promptUser(captions.data.value.map((item: { language: string; autoGenerated: string; }) => {
-                    return `[${item.language}] autogenerated: ${item.autoGenerated}`;
-                }));
-                captionsUrl = captions.data.value[index].url;
-            }
-        }
-
-        metadata.push({
-            guid: guid.url,
-            title,
-            duration,
-            publishDate,
-            publishTime,
-            author,
-            authorEmail,
-            uniqueId,
-            outPath: guid.outDir,
-            totalChunks,
-            playbackUrl,
-            posterImageUrl,
-            captionsUrl
-        });
-    }
-
-    return metadata;
+    const apiClient: ApiClient = ApiClient.getInstance(session);
+
+    let response: AxiosResponse<any> | undefined =
+        await apiClient.callApi('videos/' + videoGuid + '?$expand=creator', 'get');
+
+    title = sanitizeWindowsName(response?.data['name']);
+    duration = isoDurationToString(response?.data.media['duration']);
+    publishDate = publishedDateToString(response?.data['publishedDate']);
+    publishTime = publishedTimeToString(response?.data['publishedDate']);
+    author = response?.data['creator'].name;
+    authorEmail = response?.data['creator'].mail;
+    uniqueId = '#' + videoGuid.split('-')[0];
+    totalChunks = durationToTotalChunks(response?.data.media['duration']);
+
+    playbackUrl = response?.data['playbackUrls']
+        .filter((item: { [x: string]: string; }) =>
+            item['mimeType'] == 'application/vnd.apple.mpegurl')
+        .map((item: { [x: string]: string }) => {
+            return item['playbackUrl'];
+        })[0];
+
+    posterImageUrl = response?.data['posterImage']['medium']['url'];
+
+    if (subtitles) {
+        let captions: AxiosResponse<any> | undefined = await apiClient.callApi(`videos/${videoGuid}/texttracks`, 'get');
+
+        if (!captions?.data.value.length) {
+            captionsUrl = undefined;
+        }
+        else if (captions?.data.value.length === 1) {
+            logger.info(`Found subtitles for ${title}. \n`);
+            captionsUrl = captions?.data.value.pop().url;
+        }
+        else {
+            const index: number = promptUser(captions.data.value.map((item: { language: string; autoGenerated: string; }) => {
+                return `[${item.language}] autogenerated: ${item.autoGenerated}`;
+            }));
+            captionsUrl = captions.data.value[index].url;
+        }
+    }
+
+    return {
+        title: title,
+        duration: duration,
+        publishDate: publishDate,
+        publishTime: publishTime,
+        author: author,
+        authorEmail: authorEmail,
+        uniqueId: uniqueId,
+        outPath: outPath,
+        totalChunks: totalChunks,
+        playbackUrl: playbackUrl,
+        posterImageUrl: posterImageUrl,
+        captionsUrl: captionsUrl
+    };
 }
-export function createUniquePath(videos: Array<Video>, template: string, format: string, skip?: boolean): Array<Video>
-export function createUniquePath(videos: Video, template: string, format: string, skip?: boolean): Video
-export function createUniquePath(videos: Array<Video> | Video, template: string, format: string, skip?: boolean): Array<Video> | Video {
-    let singleInput = false;
-    if (!Array.isArray(videos)) {
-        videos = [videos];
-        singleInput = true;
-    }
-
-    videos.forEach((video: Video) => {
-        let title: string = template;
-        let finalTitle: string;
-        const elementRegEx = RegExp(/{(.*?)}/g);
-        let match = elementRegEx.exec(template);
-
-        while (match) {
-            const value = video[match[1] as keyof (Video)] as string;
-            title = title.replace(match[0], value);
-            match = elementRegEx.exec(template);
-        }
-
-        let i = 0;
-        finalTitle = title;
-
-        while (!skip && fs.existsSync(path.join(video.outPath, finalTitle + '.' + format))) {
-            finalTitle = `${title}.${++i}`;
-        }
-
-        const finalFileName = `${finalTitle}.${format}`;
-        const cleanFileName = sanitizeWindowsName(finalFileName, { replacement: '_' });
-
-        if (finalFileName !== cleanFileName) {
-            logger.warn(`Not a valid Windows file name: "${finalFileName}".\nReplacing invalid characters with underscores to preserve cross-platform consistency.`);
-        }
-
-        video.outPath = path.join(video.outPath, finalFileName);
-    });
-
-    if (singleInput) {
-        return videos[0];
-    }
-
-    return videos;
+export function createUniquePath(video: Video, outDir: string, template: string, format: string, skip?: boolean): Video {
+    let title: string = template;
+    let finalTitle: string;
+    const elementRegEx = RegExp(/{(.*?)}/g);
+    let match = elementRegEx.exec(template);
+
+    while (match) {
+        let value = video[match[1] as keyof Video] as string;
+        title = title.replace(match[0], value);
+        match = elementRegEx.exec(template);
+    }
+
+    let i = 0;
+    finalTitle = title;
+
+    while (!skip && fs.existsSync(path.join(outDir, finalTitle + '.' + format))) {
+        finalTitle = `${title}.${++i}`;
+    }
+
+    video.outPath = path.join(outDir, finalTitle + '.' + format);
+    return video;
 }
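The collision-avoidance loop in `createUniquePath` is the same on both sides: try `title.ext`, then `title.1.ext`, `title.2.ext`, and so on until a free name is found. A standalone sketch, with a `Set` of existing names standing in for the `fs.existsSync` checks:

```typescript
// Find the first free filename by appending .1, .2, ... to the title.
// `taken` is a stand-in for the on-disk existence checks in the real code.
function uniqueName(title: string, format: string, taken: Set<string>): string {
    let finalTitle = title;
    let i = 0;
    while (taken.has(`${finalTitle}.${format}`)) {
        finalTitle = `${title}.${++i}`;
    }
    return `${finalTitle}.${format}`;
}

const taken = new Set(['Lecture.mkv', 'Lecture.1.mkv']);
console.log(uniqueName('Lecture', 'mkv', taken)); // Lecture.2.mkv
```

The `skip` flag in the original short-circuits this loop on purpose: when skipping existing files, a name collision is the signal to skip, so no counter suffix should be generated.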
@@ -2,13 +2,21 @@ import { argv } from './CommandLineParser';
 import { ERROR_CODE } from './Errors';
 import { setProcessEvents } from './Events';
 import { logger } from './Logger';
-import { VideoUrl } from './Types';
-import { checkRequirements, parseInputFile, parseCLIinput } from './Utils';
+import { getPuppeteerChromiumPath } from './PuppeteerHelper';
+import { drawThumbnail } from './Thumbnail';
+import { TokenCache, refreshSession } from './TokenCache';
+import { Video, Session } from './Types';
+import { checkRequirements, ffmpegTimemarkToChunk, parseInputFile, parseCLIinput} from './Utils';
+import { getVideoInfo, createUniquePath } from './VideoUtils';
+import cliProgress from 'cli-progress';
+import fs from 'fs';
 import isElevated from 'is-elevated';
-import { downloadShareVideo, downloadStreamVideo } from './Downloaders';
+import puppeteer from 'puppeteer';
+
+const { FFmpegCommand, FFmpegInput, FFmpegOutput } = require('@tedconf/fessonia')();
+
+const tokenCache: TokenCache = new TokenCache();

 export const chromeCacheFolder = '.chrome_data';
@@ -35,32 +43,255 @@ async function init(): Promise<void> {
 }

-async function main(): Promise<void> {
-    await init(); // must be first
-
-    let streamVideos: Array<VideoUrl>, shareVideos: Array<VideoUrl>;
-    if (argv.videoUrls) {
-        logger.info('Parsing video/group urls');
-        [streamVideos, shareVideos] = await parseCLIinput(argv.videoUrls as Array<string>, argv.outputDirectory);
-    }
-    else {
-        logger.info('Parsing input file');
-        [streamVideos, shareVideos] = await parseInputFile(argv.inputFile!, argv.outputDirectory);
-    }
-
-    logger.verbose(
-        'List of urls and corresponding output directory \n' +
-        streamVideos.map(video => `\t${video.url} => ${video.outDir} \n`).join('') +
-        shareVideos.map(video => `\t${video.url} => ${video.outDir} \n`).join('')
-    );
-
-    if (streamVideos.length) {
-        await downloadStreamVideo(streamVideos);
-    }
-    if (shareVideos.length) {
-        await downloadShareVideo(shareVideos);
-    }
-}
+async function DoInteractiveLogin(url: string, username?: string): Promise<Session> {
+
+    logger.info('Launching headless Chrome to perform the OpenID Connect dance...');
+
+    const browser: puppeteer.Browser = await puppeteer.launch({
+        executablePath: getPuppeteerChromiumPath(),
+        headless: false,
+        userDataDir: (argv.keepLoginCookies) ? chromeCacheFolder : undefined,
+        args: [
+            '--disable-dev-shm-usage',
+            '--fast-start',
+            '--no-sandbox'
+        ]
+    });
+    const page: puppeteer.Page = (await browser.pages())[0];
+
+    logger.info('Navigating to login page...');
+    await page.goto(url, { waitUntil: 'load' });
+
+    try {
+        if (username) {
+            await page.waitForSelector('input[type="email"]', {timeout: 3000});
+            await page.keyboard.type(username);
+            await page.click('input[type="submit"]');
+        }
+        else {
+            /* If a username was not provided we let the user take actions that
+            lead up to the video page. */
+        }
+    }
+    catch (e) {
+        /* If there is no email input selector we aren't in the login module,
+        we are probably using the cache to aid the login.
+        It could finish the login on its own if the user said 'yes' when asked to
+        remember the credentials or it could still prompt the user for a password */
+    }
+
+    await browser.waitForTarget((target: puppeteer.Target) => target.url().endsWith('microsoftstream.com/'), { timeout: 150000 });
+    logger.info('We are logged in.');
+
+    let session: Session | null = null;
+    let tries = 1;
+
+    while (!session) {
+        try {
+            let sessionInfo: any;
+            session = await page.evaluate(
+                () => {
+                    return {
+                        AccessToken: sessionInfo.AccessToken,
+                        ApiGatewayUri: sessionInfo.ApiGatewayUri,
+                        ApiGatewayVersion: sessionInfo.ApiGatewayVersion
+                    };
+                }
+            );
+        }
+        catch (error) {
+            if (tries > 5) {
+                process.exit(ERROR_CODE.NO_SESSION_INFO);
+            }
+            session = null;
+            tries++;
+            await page.waitFor(3000);
+        }
+    }
+
+    tokenCache.Write(session);
+    logger.info('Wrote access token to token cache.');
+    logger.info("At this point Chromium's job is done, shutting it down...\n");
+
+    await browser.close();
+
+    return session;
+}
+
+async function downloadVideo(videoGuidArray: Array<string>, outputDirectoryArray: Array<string>, session: Session): Promise<void> {
+
+    for (const [index, videoGuid] of videoGuidArray.entries()) {
+
+        logger.info(`Fetching video's #${index} info... \n`);
+
+        const video: Video = createUniquePath (
+            await getVideoInfo(videoGuid, session, argv.closedCaptions),
+            outputDirectoryArray[index], argv.outputTemplate, argv.format, argv.skip
+        );
+
+        if (argv.simulate) {
+            logger.info(
+                '\nTitle: '.green + video.title +
+                '\nOutPath: '.green + video.outPath +
+                '\nPublished Date: '.green + video.publishDate +
+                '\nPlayback URL: '.green + video.playbackUrl +
+                ((video.captionsUrl) ? ('\nCC URL: '.green + video.captionsUrl) : '')
+            );
+            continue;
+        }
+
+        if (argv.skip && fs.existsSync(video.outPath)) {
+            logger.info(`File already exists, skipping: ${video.outPath} \n`);
+            continue;
+        }
+
+        if (argv.keepLoginCookies && index !== 0) {
+            logger.info('Trying to refresh token...');
+            session = await refreshSession('https://web.microsoftstream.com/video/' + videoGuidArray[index]);
+        }
+
+        const pbar: cliProgress.SingleBar = new cliProgress.SingleBar({
+            barCompleteChar: '\u2588',
+            barIncompleteChar: '\u2591',
+            format: 'progress [{bar}] {percentage}% {speed} {eta_formatted}',
+            // process.stdout.columns may return undefined in some terminals (Cygwin/MSYS)
+            barsize: Math.floor((process.stdout.columns || 30) / 3),
+            stopOnComplete: true,
+            hideCursor: true,
+        });
+
+        logger.info(`\nDownloading Video: ${video.title} \n`);
+
+        logger.verbose('Extra video info \n' +
+            '\t Video m3u8 playlist URL: '.cyan + video.playbackUrl + '\n' +
+            '\t Video thumbnail URL: '.cyan + video.posterImageUrl + '\n' +
+            '\t Video subtitle URL (may not exist): '.cyan + video.captionsUrl + '\n' +
+            '\t Video total chunks: '.cyan + video.totalChunks + '\n');
+
+        logger.info('Spawning ffmpeg with access token and HLS URL. This may take a few seconds...\n\n');
+
+        if (!process.stdout.columns) {
+            logger.warn(
+                'Unable to get number of columns from terminal.\n' +
+                'This happens sometimes in Cygwin/MSYS.\n' +
+                'No progress bar can be rendered, however the download process should not be affected.\n\n' +
+                'Please use PowerShell or cmd.exe to run destreamer on Windows.'
+            );
+        }
+
+        const headers: string = 'Authorization: Bearer ' + session.AccessToken;
+
+        if (!argv.noExperiments) {
+            await drawThumbnail(video.posterImageUrl, session);
+        }
+
+        const ffmpegInpt: any = new FFmpegInput(video.playbackUrl, new Map([
+            ['headers', headers]
+        ]));
+        const ffmpegOutput: any = new FFmpegOutput(video.outPath, new Map([
+            argv.acodec === 'none' ? ['an', null] : ['c:a', argv.acodec],
+            argv.vcodec === 'none' ? ['vn', null] : ['c:v', argv.vcodec],
+            ['n', null]
+        ]));
+        const ffmpegCmd: any = new FFmpegCommand();
+
+        const cleanupFn: () => void = () => {
+            pbar.stop();
+
+            if (argv.noCleanup) {
+                return;
+            }
+
+            try {
+                fs.unlinkSync(video.outPath);
+            }
+            catch (e) {
+                // Future handling of an error (maybe)
+            }
+        };
+
+        pbar.start(video.totalChunks, 0, {
+            speed: '0'
+        });
+
+        // prepare ffmpeg command line
+        ffmpegCmd.addInput(ffmpegInpt);
+        ffmpegCmd.addOutput(ffmpegOutput);
+
+        if (argv.closedCaptions && video.captionsUrl) {
+            const captionsInpt: any = new FFmpegInput(video.captionsUrl, new Map([
+                ['headers', headers]
+            ]));
+            ffmpegCmd.addInput(captionsInpt);
+        }
+
+        ffmpegCmd.on('update', async (data: any) => {
+            const currentChunks: number = ffmpegTimemarkToChunk(data.out_time);
+
+            pbar.update(currentChunks, {
+                speed: data.bitrate
+            });
+
+            // Graceful fallback in case we can't get columns (Cygwin/MSYS)
+            if (!process.stdout.columns) {
+                process.stdout.write(`--- Speed: ${data.bitrate}, Cursor: ${data.out_time}\r`);
+            }
+        });
+
+        process.on('SIGINT', cleanupFn);
+
+        // let the magic begin...
+        await new Promise((resolve: any) => {
+            ffmpegCmd.on('error', (error: any) => {
+                cleanupFn();
+
+                logger.error(`FFmpeg returned an error: ${error.message}`);
+                process.exit(ERROR_CODE.UNK_FFMPEG_ERROR);
+            });
+
+            ffmpegCmd.on('success', () => {
+                pbar.update(video.totalChunks); // set progress bar to 100%
+                logger.info(`\nDownload finished: ${video.outPath} \n`);
+                resolve();
+            });
+
+            ffmpegCmd.spawn();
+        });
+        process.removeListener('SIGINT', cleanupFn);
+    }
+}
+
+async function main(): Promise<void> {
+    await init(); // must be first
+
+    let session: Session;
+    session = tokenCache.Read() ?? await DoInteractiveLogin('https://web.microsoftstream.com/', argv.username);
+
+    logger.verbose('Session and API info \n' +
+        '\t API Gateway URL: '.cyan + session.ApiGatewayUri + '\n' +
+        '\t API Gateway version: '.cyan + session.ApiGatewayVersion + '\n');
+
+    let videoGUIDs: Array<string>;
+    let outDirs: Array<string>;
+
+    if (argv.videoUrls) {
+        logger.info('Parsing video/group urls');
+        [videoGUIDs, outDirs] = await parseCLIinput(argv.videoUrls as Array<string>, argv.outputDirectory, session);
+    }
+    else {
+        logger.info('Parsing input file');
+        [videoGUIDs, outDirs] = await parseInputFile(argv.inputFile!, argv.outputDirectory, session);
+    }
+
+    logger.verbose('List of GUIDs and corresponding output directory \n' +
+        videoGUIDs.map((guid: string, i: number) =>
+            `\thttps://web.microsoftstream.com/video/${guid} => ${outDirs[i]} \n`).join(''));
+
+    downloadVideo(videoGUIDs, outDirs, session);
+}

 main();
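The session-extraction loop in `DoInteractiveLogin` is a poll-with-retry pattern: probe the page for `sessionInfo` up to a fixed number of times, waiting between attempts, and give up after the limit (the real code exits with `NO_SESSION_INFO`; the generic helper below returns `null` instead, and its name is ours):

```typescript
// Generic shape of the retry loop: run an async probe up to maxTries times,
// pausing delayMs between failures; null signals exhaustion to the caller.
async function retry<T>(probe: () => Promise<T>, maxTries: number, delayMs: number): Promise<T | null> {
    for (let tries = 1; tries <= maxTries; tries++) {
        try {
            return await probe();
        }
        catch {
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
    return null;
}

// Usage: a probe that fails twice before succeeding, like a page that is
// still loading its session info on the first couple of polls.
let attempts = 0;
retry(async () => {
    if (++attempts < 3) throw new Error('not ready');
    return 'session';
}, 5, 10).then((result) => console.log(result)); // session
```

Centralizing the wait in the helper keeps the login code free of bookkeeping: the caller only decides what to do when `null` comes back.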
@@ -1,15 +1,32 @@
-import { extractStreamGuids, parseInputFile } from '../src/Utils';
+import { parseInputFile } from '../src/Utils';
+import puppeteer from 'puppeteer';
 import assert from 'assert';
 import tmp from 'tmp';
 import fs from 'fs';
-import { StreamSession, VideoUrl } from './Types';
+import { Session } from './Types';

+describe('Puppeteer', () => {
+    it('should grab GitHub page title', async () => {
+        const browser = await puppeteer.launch({
+            headless: true,
+            args: ['--disable-dev-shm-usage', '--fast-start', '--no-sandbox']
+        });
+        const page = await browser.newPage();
+        await page.goto('https://github.com/', { waitUntil: 'load' });
+        let pageTitle = await page.title();
+        assert.equal(true, pageTitle.includes('GitHub'));
+        await browser.close();
+    }).timeout(30000); // yeah, this may take a while...
+});

-// we cannot test groups parsing as that requires an actual session
-// TODO: add SharePoint urls
 describe('Destreamer parsing', () => {
-    it('Input file to arrays of guids', async () => {
-        const testSession: StreamSession = {
+    it('Input file to arrays of URLs and DIRs', async () => {
+        const testSession: Session = {
             AccessToken: '',
             ApiGatewayUri: '',
             ApiGatewayVersion: ''
@@ -27,42 +44,33 @@ describe('Destreamer parsing', () => {
             'https://web.microsoftstream.com/video/xxxxxx-gggg-xxxx-xxxx-xxxxxxxxxxxx',
             ''
         ];

-        const expectedStreamOut: Array<VideoUrl> = [
-            {
-                url: 'xxxxxxxx-aaaa-xxxx-xxxx-xxxxxxxxxxxx',
-                outDir: 'videos'
-            },
-            {
-                url: 'xxxxxxxx-bbbb-xxxx-xxxx-xxxxxxxxxxxx',
-                outDir: 'luca'
-            },
-            {
-                url: 'xxxxxxxx-cccc-xxxx-xxxx-xxxxxxxxxxxx',
-                outDir: 'videos'
-            },
-            {
-                url: 'xxxxxxxx-dddd-xxxx-xxxx-xxxxxxxxxxxx',
-                outDir: 'videos'
-            },
-            {
-                url: 'xxxxxxxx-eeee-xxxx-xxxx-xxxxxxxxxxxx',
-                outDir: 'videos'
-            },
-        ];
+        const expectedGUIDsOut: Array<string> = [
+            'xxxxxxxx-aaaa-xxxx-xxxx-xxxxxxxxxxxx',
+            'xxxxxxxx-bbbb-xxxx-xxxx-xxxxxxxxxxxx',
+            'xxxxxxxx-cccc-xxxx-xxxx-xxxxxxxxxxxx',
+            'xxxxxxxx-dddd-xxxx-xxxx-xxxxxxxxxxxx',
+            'xxxxxxxx-eeee-xxxx-xxxx-xxxxxxxxxxxx'
+        ];
+        const expectedDirOut: Array<string> = [
+            'videos',
+            'luca',
+            'videos',
+            'videos',
+            'videos'
+        ];
         const tmpFile = tmp.fileSync({ postfix: '.txt' });
         fs.writeFileSync(tmpFile.fd, testIn.join('\r\n'));

-        const [testStreamUrls]: Array<Array<VideoUrl>> = parseInputFile(tmpFile.name, 'videos');
-
-        assert.deepStrictEqual(
-            await extractStreamGuids(testStreamUrls, testSession),
-            expectedStreamOut,
-            'Error in parsing the URLs, mismatch between test and expected'.red
-        );
-        // assert.deepStrictEqual(testUrlOut, expectedGUIDsOut,
-        //     'Error in parsing the DIRs, mismatch between test and expected'.red);
+        const [testUrlOut , testDirOut]: Array<Array<string>> = await parseInputFile(tmpFile.name, 'videos', testSession);
+
+        if (testUrlOut.length !== expectedGUIDsOut.length) {
+            throw "Expected url list and test list don't have the same number of elements".red;
+        }
+        else if (testDirOut.length !== expectedDirOut.length) {
+            throw "Expected dir list and test list don't have the same number of elements".red;
+        }
+        assert.deepStrictEqual(testUrlOut, expectedGUIDsOut,
+            'Error in parsing the URLs, mismatch between test and expected'.red);
+        assert.deepStrictEqual(testDirOut, expectedDirOut,
+            'Error in parsing the DIRs, mismatch between test and expected'.red);
         assert.ok('Parsing of input file ok');
     });
 });