Uploading to PeerTube from the command line

Andy Balaam from Andy Balaam's Blog

PeerTube’s API documentation gives an example of how to upload a video, but it is missing a couple of important aspects, most notably how to provide multiple tags using form-encoded input, so my more complete script is below. Use it like this:

# First, make sure jq is installed
echo "myusername" > username
echo "mypassword" > password
./upload "video_file.mp4"

Downsides:

  1. Your username and password are visible via ps to users on the same machine (tips to avoid this are welcome)
  2. I can’t work out how to include newlines in the video description (again, tips welcome)
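On both downsides, here is an untested sketch of two curl features that might help (I have not tried them against a PeerTube server, so treat this as a hypothesis): `--data-urlencode name@file` reads a field’s value from a file and URL-encodes it, so the secret never appears on the command line or in `ps` output, and `--form name=<file` reads a multipart field’s value from a file, newlines included. Variable names match the script below; `description.txt` is a hypothetical file holding a multi-line description.

```shell
# Untested sketch of both fixes; variables come from the upload script.

get_token() {
  # --data-urlencode username@username sends "username=<file content>",
  # URL-encoded, without the value ever appearing in `ps` output.
  curl -s "${API_PATH}/users/token" \
    --data client_id="${client_id}" \
    --data client_secret="${client_secret}" \
    --data grant_type=password \
    --data response_type=code \
    --data-urlencode username@username \
    --data-urlencode password@password \
    | jq -r ".access_token"
}

upload_with_multiline_description() {
  # --form "description=<description.txt" sends the file's content as the
  # field value, newlines included (the < prefix means "read value from
  # file", as opposed to @ which makes it a file upload).
  curl "${API_PATH}/videos/upload" \
    -H "Authorization: Bearer ${token}" \
    --output curl-out.txt \
    --form videofile=@"${FILE_PATH}" \
    --form channelId="${CHANNEL_ID}" \
    --form name="${NAME}" \
    --form "description=<description.txt"
}
```

One caveat: `--data-urlencode` would also encode a trailing newline, so the credential files would need to be written without one, e.g. `printf '%s' "mypassword" > password` rather than `echo`.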

You will need to edit the script to provide your own PeerTube server URL, channel ID (a number), video description, tags etc. Output and errors from the script will be placed in curl-out.txt. Read the API docs to see what numbers you need to use for category, license etc.

Here is upload:

#!/bin/bash

set -e
set -u

USERNAME="$(cat username)"
PASSWORD="$(cat password)"
FILE_PATH="$1"
CHANNEL_ID=MY_CHANNEL_ID_EG_1234
NAME="${FILE_PATH%.*}"
NAME="${NAME#*/}"

API_PATH="https://MY_PEERTUBE_SERVER_URL/api/v1"
## AUTH
oauth_json=$(curl -s "${API_PATH}/oauth-clients/local")
client_id=$(jq -r ".client_id" <<< "${oauth_json}")
client_secret=$(jq -r ".client_secret" <<< "${oauth_json}")
token=$(curl -s "${API_PATH}/users/token" \
  --data client_id="${client_id}" \
  --data client_secret="${client_secret}" \
  --data grant_type=password \
  --data response_type=code \
  --data username="${USERNAME}" \
  --data password="${PASSWORD}" \
  | jq -r ".access_token")

echo "Uploading ${FILE_PATH}"
curl "${API_PATH}/videos/upload" \
  -H "Authorization: Bearer ${token}" \
  --output curl-out.txt \
  --max-time 6000 \
  --form videofile=@"${FILE_PATH}" \
  --form channelId="${CHANNEL_ID}" \
  --form name="$NAME" \
  --form category=15 \
  --form licence=7 \
  --form description="MY_VIDEO_DESCRIPTION" \
  --form language=en \
  --form privacy=1 \
  --form tags="TAG1" \
  --form tags="TAG2" \
  --form tags="TAG3" \
  --form tags="TAG4"
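As an aside, the two parameter expansions that derive `NAME` from the file path can be checked in isolation (the path here is made up for illustration):

```shell
# "${FILE_PATH%.*}" strips the shortest suffix matching ".*" (the file
# extension); "${NAME#*/}" strips the shortest prefix matching "*/"
# (the leading directory component).
FILE_PATH="videos/my_talk.mp4"
NAME="${FILE_PATH%.*}"   # videos/my_talk
NAME="${NAME#*/}"        # my_talk
echo "${NAME}"           # prints: my_talk
```

Note that `#*/` only removes the first directory component, so a path like `a/b/talk.mp4` would yield `b/talk` rather than `talk`.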

Developer becoming a product owner/product manager?

Allan Kelly from Allan Kelly Associates

Product Owner choosing postits

A few weeks back I had an e-mail exchange with a blog reader about the product owner role which I think other readers might be interested in, it is a question that comes up regularly with clients. In this context the product owner is a product manager (regular readers know I consider product manager to be a subset of product owner).

Reader: This makes me think that the [product owner/manager] role is indeed super hard. Do you have a view on hiring versus training internally?

I’ve had great success with moving people from development into product owner/manager roles – I did it myself once upon a time. And I remember one developer whose face lit up when I asked if he would like to move to a product role. A few years later – and several companies on – I got an e-mail from him to say how his career had flourished.

While to many the move looks obvious, it is actually far harder than it appears, and there are pitfalls.

The key thing is: the individual needs to leave their past life behind. Changing from developer to product owner/manager is changing your identity, it is hard.

The mistake that I see again and again is that the individuals – sometimes encouraged by those around them – continue to wear a developer hat. This means they don’t step into their new identity. They spread themselves thinly between two roles and their attention is divided. They see themselves as capable of everything rather than specialists in one, so they don’t devote the time to both learning their new role and mentally changing their perspective.

Imagine a hybrid developer/product manager comes back from lunch and has three or four hours spare – of course this never happens but just run with this thought experiment. Are they best: (a) pulling the highest priority item from the backlog and getting it done, or, (b) reviewing the latest customer interviews, site metrics, and perhaps picking up the phone and calling an existing customer?

Coding up a story clearly adds value, it reduces the backlog and enhances the product directly. Picking up the phone and analysing data may not have an immediate effect or enhance the product today, the payoff will come over weeks and months as better decisions are made and customers served better.

Product owners/managers need to empathise with customers and potential customers, they need to feel the pain of the business and see the opportunities in the market. Skilled coders feel the code, they hear it asking to be refactored; they dream about enhancing it in place; they worry about weaknesses, the places where coupling is too high and tests too few. In short, coders empathise with the code.

It is good that product people empathise with customers and coders with the code. But what happens when those things come into conflict? The code is crying out for a refactoring while customers are demanding a feature? Ultimately it will be a judgement call – although both sides may believe the answer is obvious.

If the code is represented by one person and the customer by another then they can have a discussion, balance priorities and options. If you ask one person to fill both roles then they need to have an argument with themselves; this is not good for their mental health or the final decision.

These problems are especially acute when the developer in question is either very experienced or very good – or both. They come to represent the product and champion it. But that makes the balancing act even more difficult. It also means that those around them cannot tell when a No is a no because there is no business justification, and when a No is a no because the code is a mess.

Hence I want the roles of developer and product specialist kept separate.

In a small company, say, less than 10 people, it can be hard to avoid this situation. And when the product is new technology or an API it is often difficult to disentangle “what the customer will pay for” from “what the technology can do”, but those traps make it all the more important that a company addresses this as it grows.

So my advice is simple: the key thing is that the individual changing roles needs to put coding behind them – and step away from the keyboard. I know that seems hard but to fill the product owner/manager role properly one has to live it.

I usually recommend sending the person in question away for training. And I do mean away (let’s hope we can travel again soon!). The person changing roles needs to immerse themselves in their new life. Sitting in a classroom with others helps make the psychological switch.

When I did it – with Pragmatic Marketing (now Pragmatic Institute) – the training was difficult to get in Europe so I went to the USA which added to the experience. Product manager culture is more developed in the USA than elsewhere – and even more developed on the West Coast simply because it has been there longer.

Going somewhere different and immersing yourself in a new culture and new ideas is a great way of breaking with your past and creating a new identity for yourself.


Subscribe to my blog newsletter and download Continuous Digital for free – normal price $9.99/£9.95/€9.95

The post Developer becoming a product owner/product manager? appeared first on Allan Kelly Associates.

Software engineering experiments: sell the idea, not the results

Derek Jones from The Shape of Code

A new paper investigates “… the feasibility of stealthily introducing vulnerabilities in OSS via hypocrite commits (i.e., seemingly beneficial commits that in fact introduce other critical issues).” Their chosen Open source project was the Linux kernel, and they submitted three patches to the kernel review process.

This interesting idea blew up in their faces, when the kernel developers deduced that they were being experimented on (they obviously don’t have a friend on the inside). The authors have come out dodging and weaving.

What can be learned by reading the paper?

Firstly, three ‘hypocrite commits’ are not enough submissions for any meaningful statistical analysis. I suspect it’s a convenience sample, a common occurrence in software engineering research. The authors sell three as a proof-of-concept.

How many of the submitted patches passed the kernel review process?

The paper does not say. The first eight pages provide an introduction to the Open source development model, the threat model for introducing vulnerabilities, and the characteristics of vulnerabilities that have been introduced (presumably by accident). This is followed by 2.5 pages of background and setup of the experiment (labelled as a proof-of-concept).

The paper then switches (section VII) to discussing a different, but related, topic: the lifetime of (unintended) vulnerabilities in patches that had been accepted (which I think should have been the topic of the paper). This interesting discussion is 1.5 pages; also see The life and death of statically detected vulnerabilities: An empirical study, covered in figure 6.9 in my book.

The last two pages discuss mitigation, related work, and conclusion (“…a proof-of-concept to safely demonstrate the practicality of hypocrite commits, and measured and quantified the risks.”; three submissions is not hard to measure and quantify, but the results are not to be found in the paper).

Having the paper provide the results (i.e., all three commits spotted, and a very negative response by those being experimented on) would have increased the chances of negative reviewer comments.

Over the past few years I have started noticing this kind of structure in software engineering papers, i.e., extended discussion of an interesting idea, setup of an experiment, and cursory or no discussion of results. Many researchers are willing to spend lots of time discussing their ideas, but are unwilling to invest much time in the practicalities of testing them. Some reviewers (who decide whether a paper is accepted for publication) don’t see anything wrong with this approach, e.g., they accept these kinds of papers.

Software engineering research remains a culture of interesting ideas, with evidence being an optional add-on.

Republishing Bartosz Milewski’s Category Theory lectures

Andy Balaam from Andy Balaam's Blog

Category Theory is an incredibly exciting and challenging area of Maths, that (among other things) can really help us understand what programming is on a fundamental level, and make us better programmers.

By far the best explanation of Category Theory that I have ever seen is a series of videos by Bartosz Milewski on YouTube.

The videos have quite a bit of background noise, and they were not available on PeerTube, so I asked for permission to edit and repost them, and Bartosz generously agreed! The conversation was in the comments section of Category Theory 1.1: Motivation and Philosophy and I reproduce it below.

So, I present these awesome videos, with background noise removed using Audacity, for your enjoyment:

Category Theory by Bartosz Milewski

Permission details:

Andy Balaam: Utterly brilliant lecture series. Is it available under a free license? I’d like to try and clean up audio and repost it to PeerTube, if that is permitted.
Bartosz Milewski: You have my permission. I consider my lectures public domain.

Visual Lint 7.0.12.336 has been released

Products, the Universe and Everything from Products, the Universe and Everything

Visual Lint 7.0.12.336 is a recommended maintenance update for Visual Lint 7.0. The following changes are included:

  • Updated the interface with IncrediBuild to support IncrediBuild 9.5.x.

  • IncrediBuild analysis tasks can now be colour coded in the IncrediBuild Build Monitor from within the Visual Studio, Atmel Studio and Eclipse plug-ins.

  • Fixed a bug which could cause duplicate IncrediBuild analysis tasks to be queued.

  • Updated the prompt displayed if an IncrediBuild installation was not found when IncrediBuild analysis was enabled.

  • Various corrections and updates to help topics.

Download Visual Lint 7.0.12.336

Visual Lint 8.0.1.337 has been released

Products, the Universe and Everything from Products, the Universe and Everything

Visual Lint 8.0.1.337 is a recommended maintenance update for Visual Lint 8.0. The following changes are included:

  • If the Visual Studio plugin is selected for installation and the Visual Studio Debug Console (VsDebugConsole.exe) is running, the installer will now ask you to close it before installation can proceed.

  • Updated the interface with IncrediBuild to support IncrediBuild 9.5.x. [also in Visual Lint 7.0.12.336]

  • IncrediBuild analysis tasks can now be colour coded in the IncrediBuild Build Monitor from within the Visual Studio, Atmel Studio and Eclipse plug-ins. [also in Visual Lint 7.0.12.336]

  • Fixed a bug which could cause duplicate IncrediBuild analysis tasks to be queued. [also in Visual Lint 7.0.12.336]

  • Updated the prompt displayed if an IncrediBuild installation was not found when IncrediBuild analysis was enabled. [also in Visual Lint 7.0.12.336]

  • Updated the values of _MSC_VER and _MSC_FULL_VER in the PC-lint Plus compiler indirect file co-rb-vs2019.lnt to support Visual Studio 2019 v16.4.4.

  • Updated the values of _MSC_VER and _MSC_FULL_VER in the PC-lint Plus compiler indirect file co-rb-vs2017.lnt to support Visual Studio 2017 v15.9.35.

  • Various corrections and updates to help topics.

Download Visual Lint 8.0.1.337

Python virtual environments with pyenv on Apple Silicon

Ekaterina Nikonova from Good With Computers

Apple's recent transition to the new architecture for its Mac computers has caused rather predictable problems for developers whose workflow depends on certain versions of pre-compiled libraries for the x86 architecture. While the latest releases of Python come with a universal installer that makes it possible to build universal binaries for M1 systems, those who prefer to manage Python environments with pyenv may find it difficult to choose the correct version for installation.

This problem can be solved by installing both x86 and arm64 Python executables. To do that, we need to be able to run pyenv in x86 mode and make sure that all system dependencies are met for both architectures. In other words, we'll need both x86 and arm64 Homebrew packages that we'll keep separate using two installations of Homebrew.

First of all, to be able to run x86 executables, we'll need to install Rosetta:

$ softwareupdate --install-rosetta

Now we can install the x86 Homebrew:

$ arch -x86_64 /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

It will be installed in the /usr/local/bin/ directory. For convenience, you can create an alias by adding the following line in your shell configuration file:

alias brew86="arch -x86_64 /usr/local/bin/brew"

Now we can invoke the x86 Homebrew as brew86 and install the packages required by pyenv for both architectures:

$ brew install openssl readline sqlite3 xz zlib

$ brew86 install openssl readline sqlite3 xz zlib

You can check whether the installation was successful and you have packages for both architectures using the file command, for example:

$ file /opt/homebrew/Cellar/openssl@1.1/1.1.1k/bin/openssl
/opt/homebrew/Cellar/openssl@1.1/1.1.1k/bin/openssl: Mach-O 64-bit executable arm64

$ file /usr/local/Cellar/openssl@1.1/1.1.1k/bin/openssl
/usr/local/Cellar/openssl@1.1/1.1.1k/bin/openssl: Mach-O 64-bit executable x86_64

To install x86 Python, you'll need to call pyenv with the arch -x86_64 prefix. For convenience, let's create an alias for this command by adding the following line in the shell config file:

alias pyenv86="arch -x86_64 pyenv"

Now you can install x86 Python binaries by calling:

$ pyenv86 install <PYTHON_VERSION>

By default, pyenv doesn't allow you to specify custom names for the installed Python versions, but you can use the pyenv-alias plugin to give your installations appropriate names:

$ VERSION_ALIAS="3.x.x_x86" pyenv86 install 3.x.x

Note that with aliases for your pyenv and Homebrew installations, you’ll have to specify them in all commands and locations, for example:

$ CFLAGS="-I$(brew86 --prefix openssl)/include" \
LDFLAGS="-L$(brew86 --prefix openssl)/lib" \
VERSION_ALIAS="3.x.x_x86" \
pyenv86 install -v 3.x.x

Using atomics for thread synchronization in C++

Anthony Williams from Just Software Solutions Blog

In my previous blog post I wrote about spin locks, and how compilers must not move the locking loop above a prior unlock.

After thinking about this some more, I realised that this is not something specific to locks — the same issue arises with any two-step synchronization between threads.

Consider the following code:

std::atomic<bool> ready1{false};
std::atomic<bool> ready2{false};

void thread1(){
  ready1.store(true, std::memory_order_release);
  while(!ready2.load(std::memory_order_acquire)){}
}

void thread2() {
  while(!ready1.load(std::memory_order_acquire)) {}
  ready2.store(true, std::memory_order_release);
}

thread1 sets ready1 to true, then waits for thread2 to set ready2 to true. Meanwhile, thread2 waits for ready1 to be true, then sets ready2 to true.

This is almost identical to the unlock/lock case from the previous blog post, except the waiting thread is just using plain load rather than exchange.

If the compiler moves the wait loop in thread1 above the store then both threads will hang forever. However, it cannot do this, for the same reason the spinlocks can't deadlock in the previous post: the store has to become visible to the other thread in a finite period of time, so it must be issued before the wait loop. https://eel.is/c++draft/intro.multithread#intro.progress-18

An implementation should ensure that the last value (in modification order) assigned by an atomic or synchronization operation will become visible to all other threads in a finite period of time.

If the optimizer moved the store across the loop in thread1, then it could not guarantee that the value became visible to the other thread in a finite period of time. Therefore such an optimization is forbidden.

Posted by Anthony Williams

Another nail for the coffin of past effort estimation research

Derek Jones from The Shape of Code

Programs are built from lines of code written by programmers. Lines of code played a starring role in many early effort estimation techniques (section 5.3.1 of my book). Why would anybody think that it was even possible to accurately estimate the number of lines of code needed to implement a library/program, let alone use it for estimating effort?

Until recently, say up to the early 1990s, there were lots of different computer systems, some with multiple (incompatible’ish) operating systems, an almost non-existent selection of non-vendor supplied libraries/packages, and programs providing more-or-less the same functionality were written more-or-less from scratch by different people/teams. People knew people who had done it before, or had even done it before themselves, so information on lines of code was available.

The numeric values for the parameters appearing in models were obtained by fitting data on recorded effort and lines needed to implement various programs (63 sets of values, one for each of the 63 programs in the case of COCOMO).

How accurate is estimated lines of code likely to be (this estimate will be plugged into a model fitted using actual lines of code)?

I’m not asking about the accuracy of effort estimates calculated using techniques based on lines of code; studies repeatedly show very poor accuracy.

There is data showing that different people implement the same functionality with programs containing a wide range of number of lines of code, e.g., the 3n+1 problem.

I recently discovered, tucked away in a dataset I had previously analyzed, developer estimates of the number of lines of code they expected to add/modify/delete to implement some functionality, along with the actuals.

The following plot shows estimated added+modified lines of code against actual, for 2,692 tasks. The fitted regression line, in red, is: Actual = 5.9*Estimated^0.72 (the standard error on the exponent is ±0.02); the green line shows Actual == Estimated (code+data):

Estimated and actual lines of code added+modified to implement a task.

The fitted red line, for lines of code, shows the pattern commonly seen with effort estimation, i.e., underestimating small values and overestimating large values; but there is a much wider spread of actuals, and the cross-over point is much further up (if estimates below 50 lines are excluded, the exponent increases to 0.92, the intercept decreases to 2, and the line shifts a bit). The vertical river of actuals either side of the 10-LOC estimate looks very odd (estimating such small values happens when people estimate everything).

My article pointing out that software effort estimation is mostly fake research has been widely read (it appears in the first three results returned by a Google search on software fake research). The early researchers did some real research to build these models, but later researchers have been blindly following the early ‘prophets’ (i.e., later research is fake).

Lines of code probably does have an impact on effort, but estimating lines of code is a fool’s errand, and plugging estimates into models built from actuals is just crazy.